Sep 23, 2025 12:53:44 AM org.apache.karaf.main.Main launch INFO: Installing and starting initial bundles
Sep 23, 2025 12:53:44 AM org.apache.karaf.main.Main launch INFO: All initial bundles installed and set to start
Sep 23, 2025 12:53:44 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Trying to lock /tmp/karaf-0.23.0/lock
Sep 23, 2025 12:53:44 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Lock acquired
Sep 23, 2025 12:53:44 AM org.apache.karaf.main.Main$KarafLockCallback lockAcquired INFO: Lock acquired. Setting startlevel to 100
2025-09-23T00:53:45,247 | INFO | CM Configuration Updater (Update: pid=org.ops4j.pax.logging) | EventAdminConfigurationNotifier | 4 - org.ops4j.pax.logging.pax-logging-log4j2 - 2.2.8 | Logging configuration changed. (Event Admin service unavailable - no notification sent).
2025-09-23T00:53:46,543 | INFO | activator-1-thread-2 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Adding features: odl-openflowplugin-flow-services-rest/[0.20.0,0.20.0],odl-openflowplugin-app-bulk-o-matic/[0.20.0,0.20.0],odl-infrautils-ready/[7.1.4,7.1.4],odl-jolokia/[11.0.0,11.0.0],9a2f80c7-237c-4902-b1a1-981ef71e7707/[0,0.0.0]
2025-09-23T00:53:46,710 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Changes to perform:
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |   Region: root
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |     Bundles to install:
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:jakarta.el/jakarta.el-api/3.0.3
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:javax.enterprise/cdi-api/2.0.SP1
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:javax.interceptor/javax.interceptor-api/1.2.2
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:javax.transaction/javax.transaction-api/1.2
2025-09-23T00:53:46,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.jasypt/1.9.3_1
2025-09-23T00:53:46,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.javax-inject/1_3
2025-09-23T00:53:46,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.ops4j.pax.jdbc/pax-jdbc/1.5.7
2025-09-23T00:53:46,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.ops4j.pax.jdbc/pax-jdbc-config/1.5.7
2025-09-23T00:53:46,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-common/1.5.7
2025-09-23T00:53:46,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.ops4j.pax.url/pax-url-wrap/2.6.16/jar/uber
2025-09-23T00:53:46,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 |       mvn:org.osgi/org.osgi.service.jdbc/1.1.0
2025-09-23T00:53:46,713 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Installing bundles: 2025-09-23T00:53:46,714 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.el/jakarta.el-api/3.0.3 2025-09-23T00:53:46,716 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.enterprise/cdi-api/2.0.SP1 2025-09-23T00:53:46,718 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.interceptor/javax.interceptor-api/1.2.2 2025-09-23T00:53:46,719 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.transaction/javax.transaction-api/1.2 2025-09-23T00:53:46,720 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.jasypt/1.9.3_1 2025-09-23T00:53:46,721 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.javax-inject/1_3 2025-09-23T00:53:46,722 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc/1.5.7 2025-09-23T00:53:46,722 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc-config/1.5.7 2025-09-23T00:53:46,723 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-common/1.5.7 2025-09-23T00:53:46,724 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-wrap/2.6.16/jar/uber 2025-09-23T00:53:46,728 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.jdbc/1.1.0 2025-09-23T00:53:46,756 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Starting bundles: 2025-09-23T00:53:46,757 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.url.wrap/2.6.16 2025-09-23T00:53:46,761 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 2025-09-23T00:53:46,763 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 2025-09-23T00:53:46,767 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.interceptor-api/1.2.2 2025-09-23T00:53:46,768 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.enterprise.cdi-api/2.0.0.SP1 2025-09-23T00:53:46,768 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.transaction-api/1.2.0 2025-09-23T00:53:46,768 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.jasypt/1.9.3.1 2025-09-23T00:53:46,769 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.osgi.service.jdbc/1.1.0.202212101352 2025-09-23T00:53:46,769 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.ops4j.pax.jdbc.pool.common/1.5.7 2025-09-23T00:53:46,769 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.config/1.5.7 2025-09-23T00:53:46,775 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc/1.5.7 2025-09-23T00:53:46,785 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Done. 2025-09-23T00:53:49,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Changes to perform: 2025-09-23T00:53:49,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Region: root 2025-09-23T00:53:49,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Bundles to uninstall: 2025-09-23T00:53:49,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 2025-09-23T00:53:49,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Bundles to install: 2025-09-23T00:53:49,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.checkerframework/checker-qual/3.49.3 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.code.gson/gson/2.13.1 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/guava/33.4.8-jre 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/failureaccess/1.0.3 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.googlecode.json-simple/json-simple/1.1.1 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.h2database/h2/2.3.232 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.rabbitmq/amqp-client/5.25.0 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/config/1.4.3 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/ssl-config-core_2.13/0.6.1 2025-09-23T00:53:49,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-client/1.38.1 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-driver/1.38.1 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-core/4.2.32 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-graphite/4.2.32 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-healthchecks/4.2.32 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jmx/4.2.32 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jvm/4.2.32 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-buffer/4.2.2.Final 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-base/4.2.2.Final 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-compression/4.2.2.Final 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http/4.2.2.Final 2025-09-23T00:53:49,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http2/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-common/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-handler/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-resolver/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-classes-epoll/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-epoll/4.2.2.Final/jar/linux-x86_64 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-unix-common/4.2.2.Final 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.activation/jakarta.activation-api/1.2.2 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.annotation/jakarta.annotation-api/1.3.5 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.servlet/jakarta.servlet-api/4.0.4 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.validation/jakarta.validation-api/2.0.2 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.ws.rs/jakarta.ws.rs-api/2.1.6 2025-09-23T00:53:49,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.javassist/javassist/3.30.2-GA 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.servlet/javax.servlet-api/3.1.0 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.websocket/jakarta.websocket-api/1.1.2 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/karaf.branding/14.1.0 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.lz4/lz4-java/1.8.0 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:net.bytebuddy/byte-buddy/1.17.5 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.agrona/agrona/1.15.2 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.antlr/antlr4-runtime/4.13.2 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.3.2 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.10.3 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.api/1.1.5 2025-09-23T00:53:49,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.api/1.2.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.core/1.2.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.core/1.1.8 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.whiteboard/1.2.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.proxy/org.apache.aries.proxy/1.1.14 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.quiesce/org.apache.aries.quiesce.api/1.0.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries/org.apache.aries.util/1.1.3 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-collections/commons-collections/3.2.2 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-beanutils/commons-beanutils/1.11.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-codec/commons-codec/1.15 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-lang3/3.17.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-text/1.13.0 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.felix/org.apache.felix.scr/2.2.6 2025-09-23T00:53:49,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.geronimo.specs/geronimo-atinject_1.0_spec/1.2 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.blueprintstate/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.core/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.config/org.apache.karaf.config.command/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.blueprint/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.features/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.kar/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.wrap/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.boot/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.core/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.features/org.apache.karaf.features.command/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.http/org.apache.karaf.http.core/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.instance/org.apache.karaf.instance.core/4.4.7 2025-09-23T00:53:49,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.command/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.config/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.modules/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jdbc/org.apache.karaf.jdbc.core/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.kar/org.apache.karaf.kar.core/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.log/org.apache.karaf.log.core/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.management/org.apache.karaf.management.server/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.package/org.apache.karaf.package.core/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.management/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.state/4.4.7 2025-09-23T00:53:49,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.service/org.apache.karaf.service.core/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.commands/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.core/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.table/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.system/org.apache.karaf.system.core/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.web/org.apache.karaf.web.core/4.4.7 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-osgi/2.14.0 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-scp/2.14.0 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-sftp/2.14.0 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jdt/ecj/3.26.0 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-client/9.4.57.v20241219 2025-09-23T00:53:49,113 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-continuation/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-http/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-io/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jaas/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jmx/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-security/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-server/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlet/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlets/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util-ajax/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-xml/9.4.57.v20241219 2025-09-23T00:53:49,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-api/2.6.1 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2.external/aopalliance-repackaged/2.6.1 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-locator/2.6.1 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/osgi-resource-locator/1.0.3 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-utils/2.6.1 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet-core/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-client/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 
4.4.7 | mvn:org.glassfish.jersey.core/jersey-common/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-server/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.inject/jersey-hk2/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.media/jersey-media-sse/2.47 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jline/jline/3.21.0 2025-09-23T00:53:49,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jolokia/jolokia-osgi/1.7.2 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jspecify/jspecify/1.0.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm/9.7.1 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-commons/9.7.1 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-tree/9.7.1 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-analysis/9.7.1 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-util/9.7.1 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-authn-api/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-cert/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service-impl/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-filterchain/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-idm-store-h2/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-jetty-auth-log-filter/0.21.0 2025-09-23T00:53:49,116 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-api/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-impl/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.aaa/repackaged-shiro/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro-api/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-tokenauthrealm/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-api/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-osgi-impl/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-api/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-jersey2/0.21.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/atomix-storage/11.0.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/blueprint/11.0.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-api/11.0.0 2025-09-23T00:53:49,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-client/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-dom-api/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-mgmt-api/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/eos-dom-akka/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-api/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-journal/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-spi/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/repackaged-pekko/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-raft/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-segmented-journal/11.0.0 
2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-api/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-impl/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-clustering-commons/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-common-util/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-distributed-datastore/11.0.0 2025-09-23T00:53:49,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-remoterpc-connector/11.0.0 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-api/7.1.4 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-impl/7.1.4 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-shell/7.1.4 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-api/7.1.4 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-impl/7.1.4 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/infrautils-util/7.1.4 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-dom-adapter/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-util/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-crypt-hash/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-encryption-algs/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-key-exchange-algs/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-mac-algs/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-public-key-algs/14.0.13 2025-09-23T00:53:49,119 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-tls-cipher-suite-algs/14.0.13 2025-09-23T00:53:49,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6241/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6243/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6470/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-inet-types/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-yang-types/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7407-ietf-x509-cert-to-name/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7952/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf-monitoring/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8072/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8341/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-datastores/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-origin/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8343/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8344/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8525/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8526/14.0.13 2025-09-23T00:53:49,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 
4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8528/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8529/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8639/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8650/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9640/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9641/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9642/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-client/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-common/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-server/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-client/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-common/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-server/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-client/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-common/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-server/14.0.13 2025-09-23T00:53:49,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-adapter/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-spi/14.0.13 2025-09-23T00:53:49,122 | 
INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-common-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-broker/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-schema-osgi/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-spi/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-common-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-dom-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-api/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-impl/14.0.13 2025-09-23T00:53:49,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/general-entity/14.0.13 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-topology/2013.10.21.26.13 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-type-util/14.0.13 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/opendaylight-l2-types/2013.08.27.26.13 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/yang-ext/2013.09.07.26.13 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/databind/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-dom-api/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-api/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-none/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/draft-ietf-restconf-server/9.0.0 
2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/rfc5277/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/sal-remote/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-api/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-common-mdsal/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/odl-device-notification/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-api/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-mdsal-spi/9.0.0 2025-09-23T00:53:49,123 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-nb/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-api/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-jaxrs/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-mdsal/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-spi/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-subscription/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/sal-remote-impl/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/shaded-sshd/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-api/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-http/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-ssh/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tcp/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tls/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-api/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-none/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/yanglib-mdsal-writer/9.0.0 2025-09-23T00:53:49,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/bundles-diag/14.1.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-api/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-api/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-impl/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/bulk-o-matic/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/device-ownership-service/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/forwardingrules-manager/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/lldp-speaker/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/of-switch-config-pusher/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/reconciliation-framework/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-lldp-discovery/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-manager/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-blueprint-config/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.openflowplugin/openflowplugin-common/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-api/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-onf/0.20.0 2025-09-23T00:53:49,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-impl/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.libraries/liblldp/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-base/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-service/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-statistics/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-inventory/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-topology/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-blueprint-config/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-api/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-impl/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-spi/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-util/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-api/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-impl/0.20.0 2025-09-23T00:53:49,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-shell/0.20.0 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-api/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-dynamic/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-osgi/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-spi/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-generator/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-loader/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-model/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-reflect/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-api/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-osgi/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-spi/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-spec/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/codegen-extensions/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/concepts/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-model-api/14.0.14 2025-09-23T00:53:49,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.yangtools/rfc6536-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-model-api/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-parser-support/14.0.14 2025-09-23T00:53:49,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-model-api/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-parser-support/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-model-api/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-parser-support/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/util/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common-netty/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-api/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-binfmt/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-gson/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-xml/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-impl/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-spi/14.0.14 2025-09-23T00:53:49,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-transform/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-api/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-ri/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-spi/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-util/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-ir/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-api/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-export/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-ri/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-spi/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-util/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-api/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-impl/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-reactor/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-rfc7950/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-spi/14.0.14 2025-09-23T00:53:49,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-api/14.0.14 2025-09-23T00:53:49,131 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-fs/14.0.14 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-spi/14.0.14 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-api/14.0.14 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-impl/14.0.14 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-war/2.6.16/jar/uber 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-api/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-el2/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-servlet31/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-war/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-whiteboard/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jetty/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jsp/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-runtime/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-spi/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-tomcat-common/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-websocket/8.0.30 2025-09-23T00:53:49,131 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.component/1.5.1 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.owasp.encoder/encoder/1.3.1 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang.modules/scala-parser-combinators_2.13/1.1.2 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang/scala-library/2.13.16 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.scala-lang/scala-reflect/2.13.16 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.reactivestreams/reactive-streams/1.0.4 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.codehaus.woodstox/stax2-api/4.2.2 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:tech.pantheon.triemap/triemap/1.3.2 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:net.java.dev.stax-utils/stax-utils/20070216 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:org.lmdbjava/lmdbjava/0.7.0 2025-09-23T00:53:49,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Stopping bundles: 2025-09-23T00:53:49,133 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.pool.common/1.5.7 2025-09-23T00:53:49,134 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.jasypt/1.9.3.1 2025-09-23T00:53:49,134 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 2025-09-23T00:53:49,134 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.transaction-api/1.2.0 2025-09-23T00:53:49,135 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.enterprise.cdi-api/2.0.0.SP1 2025-09-23T00:53:49,135 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 2025-09-23T00:53:49,135 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.config/1.5.7 2025-09-23T00:53:49,136 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Uninstalling bundles: 2025-09-23T00:53:49,136 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 2025-09-23T00:53:49,138 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Installing bundles: 2025-09-23T00:53:49,138 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.checkerframework/checker-qual/3.49.3 2025-09-23T00:53:49,140 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.code.gson/gson/2.13.1 2025-09-23T00:53:49,141 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/guava/33.4.8-jre 2025-09-23T00:53:49,145 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/failureaccess/1.0.3 2025-09-23T00:53:49,146 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.googlecode.json-simple/json-simple/1.1.1 2025-09-23T00:53:49,147 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:com.h2database/h2/2.3.232 2025-09-23T00:53:49,151 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.rabbitmq/amqp-client/5.25.0 2025-09-23T00:53:49,153 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/config/1.4.3 2025-09-23T00:53:49,154 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/ssl-config-core_2.13/0.6.1 2025-09-23T00:53:49,156 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-client/1.38.1 2025-09-23T00:53:49,157 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-driver/1.38.1 2025-09-23T00:53:49,158 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-core/4.2.32 2025-09-23T00:53:49,159 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-graphite/4.2.32 2025-09-23T00:53:49,160 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-healthchecks/4.2.32 2025-09-23T00:53:49,160 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jmx/4.2.32 2025-09-23T00:53:49,161 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jvm/4.2.32 2025-09-23T00:53:49,162 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-buffer/4.2.2.Final 2025-09-23T00:53:49,163 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-base/4.2.2.Final 2025-09-23T00:53:49,164 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-compression/4.2.2.Final 2025-09-23T00:53:49,165 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http/4.2.2.Final 2025-09-23T00:53:49,167 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http2/4.2.2.Final 2025-09-23T00:53:49,169 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-common/4.2.2.Final 2025-09-23T00:53:49,170 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-handler/4.2.2.Final 2025-09-23T00:53:49,172 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-resolver/4.2.2.Final 2025-09-23T00:53:49,173 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport/4.2.2.Final 2025-09-23T00:53:49,175 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-classes-epoll/4.2.2.Final 2025-09-23T00:53:49,176 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-epoll/4.2.2.Final/jar/linux-x86_64 2025-09-23T00:53:49,177 | 
INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-unix-common/4.2.2.Final 2025-09-23T00:53:49,178 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.activation/jakarta.activation-api/1.2.2 2025-09-23T00:53:49,179 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.annotation/jakarta.annotation-api/1.3.5 2025-09-23T00:53:49,180 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.servlet/jakarta.servlet-api/4.0.4 2025-09-23T00:53:49,181 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.validation/jakarta.validation-api/2.0.2 2025-09-23T00:53:49,182 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.ws.rs/jakarta.ws.rs-api/2.1.6 2025-09-23T00:53:49,183 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.javassist/javassist/3.30.2-GA 2025-09-23T00:53:49,185 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.servlet/javax.servlet-api/3.1.0 2025-09-23T00:53:49,185 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.websocket/jakarta.websocket-api/1.1.2 2025-09-23T00:53:49,186 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/karaf.branding/14.1.0 2025-09-23T00:53:49,187 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.lz4/lz4-java/1.8.0 2025-09-23T00:53:49,188 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:net.bytebuddy/byte-buddy/1.17.5 2025-09-23T00:53:49,202 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.agrona/agrona/1.15.2 2025-09-23T00:53:49,204 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.antlr/antlr4-runtime/4.13.2 2025-09-23T00:53:49,205 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1 2025-09-23T00:53:49,206 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.3.2 2025-09-23T00:53:49,207 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.10.3 2025-09-23T00:53:49,209 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.api/1.1.5 2025-09-23T00:53:49,209 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.api/1.2.0 2025-09-23T00:53:49,210 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.core/1.2.0 2025-09-23T00:53:49,211 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.core/1.1.8 2025-09-23T00:53:49,212 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.whiteboard/1.2.0 2025-09-23T00:53:49,213 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.proxy/org.apache.aries.proxy/1.1.14 2025-09-23T00:53:49,213 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.quiesce/org.apache.aries.quiesce.api/1.0.0 2025-09-23T00:53:49,214 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries/org.apache.aries.util/1.1.3 2025-09-23T00:53:49,233 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-collections/commons-collections/3.2.2 2025-09-23T00:53:49,236 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-beanutils/commons-beanutils/1.11.0 2025-09-23T00:53:49,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-codec/commons-codec/1.15 2025-09-23T00:53:49,243 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-lang3/3.17.0 2025-09-23T00:53:49,245 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-text/1.13.0 2025-09-23T00:53:49,246 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.felix/org.apache.felix.scr/2.2.6 2025-09-23T00:53:49,247 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.geronimo.specs/geronimo-atinject_1.0_spec/1.2 2025-09-23T00:53:49,248 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.blueprintstate/4.4.7 2025-09-23T00:53:49,249 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.core/4.4.7 2025-09-23T00:53:49,250 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.config/org.apache.karaf.config.command/4.4.7 2025-09-23T00:53:49,251 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.blueprint/4.4.7 2025-09-23T00:53:49,252 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.features/4.4.7 2025-09-23T00:53:49,253 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.kar/4.4.7 2025-09-23T00:53:49,254 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.wrap/4.4.7 2025-09-23T00:53:49,255 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.boot/4.4.7 
2025-09-23T00:53:49,255 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.core/4.4.7 2025-09-23T00:53:49,256 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.features/org.apache.karaf.features.command/4.4.7 2025-09-23T00:53:49,257 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.http/org.apache.karaf.http.core/4.4.7 2025-09-23T00:53:49,260 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.instance/org.apache.karaf.instance.core/4.4.7 2025-09-23T00:53:49,261 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.command/4.4.7 2025-09-23T00:53:49,262 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.config/4.4.7 2025-09-23T00:53:49,262 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.modules/4.4.7 2025-09-23T00:53:49,265 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jdbc/org.apache.karaf.jdbc.core/4.4.7 2025-09-23T00:53:49,266 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.kar/org.apache.karaf.kar.core/4.4.7 2025-09-23T00:53:49,267 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.log/org.apache.karaf.log.core/4.4.7 2025-09-23T00:53:49,268 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.management/org.apache.karaf.management.server/4.4.7 2025-09-23T00:53:49,269 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.package/org.apache.karaf.package.core/4.4.7 2025-09-23T00:53:49,270 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.management/4.4.7 2025-09-23T00:53:49,271 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.state/4.4.7 2025-09-23T00:53:49,272 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.service/org.apache.karaf.service.core/4.4.7 2025-09-23T00:53:49,272 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.commands/4.4.7 2025-09-23T00:53:49,273 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.4.7 2025-09-23T00:53:49,275 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.core/4.4.7 2025-09-23T00:53:49,277 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.4.7 
2025-09-23T00:53:49,278 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.table/4.4.7 2025-09-23T00:53:49,279 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.system/org.apache.karaf.system.core/4.4.7 2025-09-23T00:53:49,280 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.web/org.apache.karaf.web.core/4.4.7 2025-09-23T00:53:49,281 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-osgi/2.14.0 2025-09-23T00:53:49,286 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-scp/2.14.0 2025-09-23T00:53:49,288 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-sftp/2.14.0 2025-09-23T00:53:49,289 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jdt/ecj/3.26.0 2025-09-23T00:53:49,336 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-client/9.4.57.v20241219 2025-09-23T00:53:49,340 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-continuation/9.4.57.v20241219 2025-09-23T00:53:49,341 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-http/9.4.57.v20241219 2025-09-23T00:53:49,344 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-io/9.4.57.v20241219 2025-09-23T00:53:49,345 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jaas/9.4.57.v20241219 2025-09-23T00:53:49,346 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jmx/9.4.57.v20241219 2025-09-23T00:53:49,348 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-security/9.4.57.v20241219 2025-09-23T00:53:49,349 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-server/9.4.57.v20241219 2025-09-23T00:53:49,351 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlet/9.4.57.v20241219 2025-09-23T00:53:49,352 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlets/9.4.57.v20241219 2025-09-23T00:53:49,354 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util/9.4.57.v20241219 2025-09-23T00:53:49,356 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util-ajax/9.4.57.v20241219 2025-09-23T00:53:49,357 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-xml/9.4.57.v20241219 2025-09-23T00:53:49,358 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-api/2.6.1 2025-09-23T00:53:49,359 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2.external/aopalliance-repackaged/2.6.1 2025-09-23T00:53:49,359 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-locator/2.6.1 2025-09-23T00:53:49,361 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/osgi-resource-locator/1.0.3 2025-09-23T00:53:49,361 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-utils/2.6.1 2025-09-23T00:53:49,362 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet/2.47 2025-09-23T00:53:49,363 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet-core/2.47 2025-09-23T00:53:49,364 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-client/2.47 2025-09-23T00:53:49,365 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-common/2.47 2025-09-23T00:53:49,368 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-server/2.47 2025-09-23T00:53:49,371 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.inject/jersey-hk2/2.47 2025-09-23T00:53:49,371 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.media/jersey-media-sse/2.47 2025-09-23T00:53:49,372 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jline/jline/3.21.0 2025-09-23T00:53:49,375 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jolokia/jolokia-osgi/1.7.2 2025-09-23T00:53:49,376 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jspecify/jspecify/1.0.0 2025-09-23T00:53:49,376 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm/9.7.1 2025-09-23T00:53:49,377 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-commons/9.7.1 2025-09-23T00:53:49,378 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-tree/9.7.1 2025-09-23T00:53:49,379 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-analysis/9.7.1 2025-09-23T00:53:49,379 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-util/9.7.1 2025-09-23T00:53:49,380 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-authn-api/0.21.0 2025-09-23T00:53:49,381 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.aaa/aaa-cert/0.21.0 2025-09-23T00:53:49,382 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service/0.21.0 2025-09-23T00:53:49,383 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service-impl/0.21.0 2025-09-23T00:53:49,384 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-filterchain/0.21.0 2025-09-23T00:53:49,385 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-idm-store-h2/0.21.0 2025-09-23T00:53:49,385 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-jetty-auth-log-filter/0.21.0 2025-09-23T00:53:49,386 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-api/0.21.0 2025-09-23T00:53:49,387 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-impl/0.21.0 2025-09-23T00:53:49,388 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/repackaged-shiro/0.21.0 2025-09-23T00:53:49,390 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro/0.21.0 2025-09-23T00:53:49,392 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro-api/0.21.0 2025-09-23T00:53:49,393 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-tokenauthrealm/0.21.0 2025-09-23T00:53:49,393 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-api/0.21.0 2025-09-23T00:53:49,394 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-osgi-impl/0.21.0 2025-09-23T00:53:49,395 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-api/0.21.0 2025-09-23T00:53:49,396 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-jersey2/0.21.0 2025-09-23T00:53:49,397 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/atomix-storage/11.0.0 2025-09-23T00:53:49,398 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/blueprint/11.0.0 2025-09-23T00:53:49,399 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-api/11.0.0 2025-09-23T00:53:49,400 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-client/11.0.0 2025-09-23T00:53:49,401 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-dom-api/11.0.0 2025-09-23T00:53:49,402 | 
INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-mgmt-api/11.0.0 2025-09-23T00:53:49,402 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/eos-dom-akka/11.0.0 2025-09-23T00:53:49,404 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-api/11.0.0 2025-09-23T00:53:49,405 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-journal/11.0.0 2025-09-23T00:53:49,405 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-spi/11.0.0 2025-09-23T00:53:49,406 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/repackaged-pekko/11.0.0 2025-09-23T00:53:49,430 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-raft/11.0.0 2025-09-23T00:53:49,432 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-segmented-journal/11.0.0 2025-09-23T00:53:49,433 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-api/11.0.0 2025-09-23T00:53:49,435 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-impl/11.0.0 2025-09-23T00:53:49,436 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-clustering-commons/11.0.0 2025-09-23T00:53:49,437 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-common-util/11.0.0 2025-09-23T00:53:49,438 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-distributed-datastore/11.0.0 2025-09-23T00:53:49,441 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-remoterpc-connector/11.0.0 2025-09-23T00:53:49,442 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-api/7.1.4 2025-09-23T00:53:49,443 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-impl/7.1.4 2025-09-23T00:53:49,444 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-shell/7.1.4 2025-09-23T00:53:49,444 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-api/7.1.4 2025-09-23T00:53:49,445 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-impl/7.1.4 2025-09-23T00:53:49,446 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/infrautils-util/7.1.4 
2025-09-23T00:53:49,447 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-dom-adapter/14.0.13 2025-09-23T00:53:49,448 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-util/14.0.13 2025-09-23T00:53:49,449 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-crypt-hash/14.0.13 2025-09-23T00:53:49,450 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-encryption-algs/14.0.13 2025-09-23T00:53:49,451 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-key-exchange-algs/14.0.13 2025-09-23T00:53:49,451 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-mac-algs/14.0.13 2025-09-23T00:53:49,452 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-public-key-algs/14.0.13 2025-09-23T00:53:49,453 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-tls-cipher-suite-algs/14.0.13 2025-09-23T00:53:49,454 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6241/14.0.13 2025-09-23T00:53:49,456 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6243/14.0.13 2025-09-23T00:53:49,457 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6470/14.0.13 2025-09-23T00:53:49,458 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-inet-types/14.0.13 2025-09-23T00:53:49,459 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-yang-types/14.0.13 2025-09-23T00:53:49,460 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7407-ietf-x509-cert-to-name/14.0.13 2025-09-23T00:53:49,460 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7952/14.0.13 2025-09-23T00:53:49,461 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf/14.0.13 2025-09-23T00:53:49,462 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf-monitoring/14.0.13 2025-09-23T00:53:49,463 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8072/14.0.13 2025-09-23T00:53:49,465 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8341/14.0.13 2025-09-23T00:53:49,466 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-datastores/14.0.13 2025-09-23T00:53:49,466 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-origin/14.0.13 2025-09-23T00:53:49,467 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8343/14.0.13 2025-09-23T00:53:49,468 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8344/14.0.13 2025-09-23T00:53:49,470 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8525/14.0.13 2025-09-23T00:53:49,471 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8526/14.0.13 2025-09-23T00:53:49,472 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8528/14.0.13 2025-09-23T00:53:49,473 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8529/14.0.13 2025-09-23T00:53:49,474 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8639/14.0.13 2025-09-23T00:53:49,476 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8650/14.0.13 2025-09-23T00:53:49,477 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9640/14.0.13 2025-09-23T00:53:49,479 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9641/14.0.13 2025-09-23T00:53:49,481 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9642/14.0.13 2025-09-23T00:53:49,483 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-client/14.0.13 2025-09-23T00:53:49,484 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-common/14.0.13 2025-09-23T00:53:49,485 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-server/14.0.13 2025-09-23T00:53:49,485 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-client/14.0.13 2025-09-23T00:53:49,486 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-common/14.0.13 2025-09-23T00:53:49,488 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-server/14.0.13 2025-09-23T00:53:49,489 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-client/14.0.13 2025-09-23T00:53:49,490 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-common/14.0.13 2025-09-23T00:53:49,492 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-server/14.0.13 2025-09-23T00:53:49,493 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-adapter/14.0.13 2025-09-23T00:53:49,494 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-api/14.0.13 2025-09-23T00:53:49,494 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-spi/14.0.13 2025-09-23T00:53:49,495 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-common-api/14.0.13 2025-09-23T00:53:49,496 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-api/14.0.13 2025-09-23T00:53:49,497 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-broker/14.0.13 2025-09-23T00:53:49,498 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-schema-osgi/14.0.13 2025-09-23T00:53:49,498 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-spi/14.0.13 2025-09-23T00:53:49,499 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-api/14.0.13 2025-09-23T00:53:49,500 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-common-api/14.0.13 2025-09-23T00:53:49,501 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-dom-api/14.0.13 2025-09-23T00:53:49,501 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-api/14.0.13 2025-09-23T00:53:49,502 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-impl/14.0.13 2025-09-23T00:53:49,503 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/general-entity/14.0.13 2025-09-23T00:53:49,504 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-topology/2013.10.21.26.13 
2025-09-23T00:53:49,504 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-type-util/14.0.13 2025-09-23T00:53:49,505 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/opendaylight-l2-types/2013.08.27.26.13 2025-09-23T00:53:49,506 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/yang-ext/2013.09.07.26.13 2025-09-23T00:53:49,507 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/databind/9.0.0 2025-09-23T00:53:49,508 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-dom-api/9.0.0 2025-09-23T00:53:49,508 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-api/9.0.0 2025-09-23T00:53:49,509 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-none/9.0.0 2025-09-23T00:53:49,509 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/draft-ietf-restconf-server/9.0.0 2025-09-23T00:53:49,512 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/rfc5277/9.0.0 2025-09-23T00:53:49,513 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/sal-remote/9.0.0 2025-09-23T00:53:49,513 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-api/9.0.0 2025-09-23T00:53:49,514 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-common-mdsal/9.0.0 2025-09-23T00:53:49,515 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/odl-device-notification/9.0.0 2025-09-23T00:53:49,516 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-api/9.0.0 2025-09-23T00:53:49,517 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-mdsal-spi/9.0.0 2025-09-23T00:53:49,518 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-nb/9.0.0 2025-09-23T00:53:49,518 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server/9.0.0 2025-09-23T00:53:49,520 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-api/9.0.0 2025-09-23T00:53:49,521 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-jaxrs/9.0.0 2025-09-23T00:53:49,522 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-mdsal/9.0.0 
2025-09-23T00:53:49,523 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-spi/9.0.0 2025-09-23T00:53:49,524 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-subscription/9.0.0 2025-09-23T00:53:49,525 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/sal-remote-impl/9.0.0 2025-09-23T00:53:49,526 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/shaded-sshd/9.0.0 2025-09-23T00:53:49,532 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-api/9.0.0 2025-09-23T00:53:49,533 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-http/9.0.0 2025-09-23T00:53:49,535 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-ssh/9.0.0 2025-09-23T00:53:49,537 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tcp/9.0.0 2025-09-23T00:53:49,538 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tls/9.0.0 2025-09-23T00:53:49,539 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-api/9.0.0 2025-09-23T00:53:49,539 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-none/9.0.0 2025-09-23T00:53:49,540 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/yanglib-mdsal-writer/9.0.0 2025-09-23T00:53:49,541 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/bundles-diag/14.1.0 2025-09-23T00:53:49,542 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin/0.20.0 2025-09-23T00:53:49,545 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-api/0.20.0 2025-09-23T00:53:49,547 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-api/0.20.0 2025-09-23T00:53:49,548 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-impl/0.20.0 2025-09-23T00:53:49,549 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/bulk-o-matic/0.20.0 2025-09-23T00:53:49,550 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/device-ownership-service/0.20.0 2025-09-23T00:53:49,551 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core 
- 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/forwardingrules-manager/0.20.0 2025-09-23T00:53:49,552 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/lldp-speaker/0.20.0 2025-09-23T00:53:49,553 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/of-switch-config-pusher/0.20.0 2025-09-23T00:53:49,554 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/reconciliation-framework/0.20.0 2025-09-23T00:53:49,555 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-lldp-discovery/0.20.0 2025-09-23T00:53:49,556 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-manager/0.20.0 2025-09-23T00:53:49,557 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-blueprint-config/0.20.0 2025-09-23T00:53:49,557 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-common/0.20.0 2025-09-23T00:53:49,558 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-api/0.20.0 2025-09-23T00:53:49,561 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-onf/0.20.0 2025-09-23T00:53:49,563 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-impl/0.20.0 2025-09-23T00:53:49,567 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.libraries/liblldp/0.20.0 2025-09-23T00:53:49,568 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-base/0.20.0 2025-09-23T00:53:49,574 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-service/0.20.0 2025-09-23T00:53:49,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-statistics/0.20.0 2025-09-23T00:53:49,583 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-inventory/0.20.0 2025-09-23T00:53:49,584 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-topology/0.20.0 2025-09-23T00:53:49,585 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-blueprint-config/0.20.0 2025-09-23T00:53:49,585 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-api/0.20.0 2025-09-23T00:53:49,595 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-impl/0.20.0 2025-09-23T00:53:49,598 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-spi/0.20.0 2025-09-23T00:53:49,599 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-util/0.20.0 2025-09-23T00:53:49,600 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-api/0.20.0 2025-09-23T00:53:49,601 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-impl/0.20.0 2025-09-23T00:53:49,602 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-shell/0.20.0 2025-09-23T00:53:49,602 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-api/14.0.14 2025-09-23T00:53:49,603 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-dynamic/14.0.14 2025-09-23T00:53:49,604 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-osgi/14.0.14 2025-09-23T00:53:49,605 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-spi/14.0.14 2025-09-23T00:53:49,606 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-generator/14.0.14 2025-09-23T00:53:49,607 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-loader/14.0.14 2025-09-23T00:53:49,608 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-model/14.0.14 2025-09-23T00:53:49,608 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-reflect/14.0.14 2025-09-23T00:53:49,609 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-api/14.0.14 2025-09-23T00:53:49,610 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-osgi/14.0.14 2025-09-23T00:53:49,611 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-spi/14.0.14 2025-09-23T00:53:49,611 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-spec/14.0.14 2025-09-23T00:53:49,613 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.yangtools/codegen-extensions/14.0.14 2025-09-23T00:53:49,613 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/concepts/14.0.14 2025-09-23T00:53:49,614 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-model-api/14.0.14 2025-09-23T00:53:49,614 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-parser-support/14.0.14 2025-09-23T00:53:49,615 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-model-api/14.0.14 2025-09-23T00:53:49,616 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-parser-support/14.0.14 2025-09-23T00:53:49,616 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-model-api/14.0.14 2025-09-23T00:53:49,617 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-parser-support/14.0.14 2025-09-23T00:53:49,618 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-model-api/14.0.14 2025-09-23T00:53:49,618 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-parser-support/14.0.14 2025-09-23T00:53:49,619 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-model-api/14.0.14 2025-09-23T00:53:49,619 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-parser-support/14.0.14 2025-09-23T00:53:49,620 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-model-api/14.0.14 2025-09-23T00:53:49,621 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-parser-support/14.0.14 2025-09-23T00:53:49,621 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-model-api/14.0.14 2025-09-23T00:53:49,622 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-parser-support/14.0.14 2025-09-23T00:53:49,623 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-model-api/14.0.14 2025-09-23T00:53:49,624 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-parser-support/14.0.14 2025-09-23T00:53:49,624 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-model-api/14.0.14 2025-09-23T00:53:49,625 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-parser-support/14.0.14 2025-09-23T00:53:49,626 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-model-api/14.0.14 2025-09-23T00:53:49,626 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-parser-support/14.0.14 2025-09-23T00:53:49,627 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/util/14.0.14 2025-09-23T00:53:49,628 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common/14.0.14 2025-09-23T00:53:49,629 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common-netty/14.0.14 2025-09-23T00:53:49,629 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-api/14.0.14 2025-09-23T00:53:49,630 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-binfmt/14.0.14 2025-09-23T00:53:49,631 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-gson/14.0.14 2025-09-23T00:53:49,632 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-xml/14.0.14 2025-09-23T00:53:49,633 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-impl/14.0.14 2025-09-23T00:53:49,634 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-spi/14.0.14 2025-09-23T00:53:49,634 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-transform/14.0.14 2025-09-23T00:53:49,635 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-api/14.0.14 2025-09-23T00:53:49,636 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-ri/14.0.14 2025-09-23T00:53:49,637 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-spi/14.0.14 2025-09-23T00:53:49,638 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-util/14.0.14 2025-09-23T00:53:49,638 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-ir/14.0.14 2025-09-23T00:53:49,639 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-api/14.0.14 2025-09-23T00:53:49,641 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-export/14.0.14 2025-09-23T00:53:49,641 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-ri/14.0.14 
2025-09-23T00:53:49,642 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-spi/14.0.14 2025-09-23T00:53:49,643 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-util/14.0.14 2025-09-23T00:53:49,644 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-api/14.0.14 2025-09-23T00:53:49,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-impl/14.0.14 2025-09-23T00:53:49,646 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-reactor/14.0.14 2025-09-23T00:53:49,647 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-rfc7950/14.0.14 2025-09-23T00:53:49,649 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-spi/14.0.14 2025-09-23T00:53:49,649 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-api/14.0.14 2025-09-23T00:53:49,650 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-fs/14.0.14 2025-09-23T00:53:49,651 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-spi/14.0.14 2025-09-23T00:53:49,652 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-api/14.0.14 2025-09-23T00:53:49,653 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-impl/14.0.14 2025-09-23T00:53:49,654 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-war/2.6.16/jar/uber 2025-09-23T00:53:49,657 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-api/8.0.30 2025-09-23T00:53:49,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-el2/8.0.30 2025-09-23T00:53:49,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-servlet31/8.0.30 2025-09-23T00:53:49,659 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-war/8.0.30 2025-09-23T00:53:49,660 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-whiteboard/8.0.30 2025-09-23T00:53:49,661 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jetty/8.0.30 2025-09-23T00:53:49,662 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jsp/8.0.30 2025-09-23T00:53:49,664 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-runtime/8.0.30 2025-09-23T00:53:49,666 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-spi/8.0.30 2025-09-23T00:53:49,667 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-tomcat-common/8.0.30 2025-09-23T00:53:49,669 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-websocket/8.0.30 2025-09-23T00:53:49,669 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.component/1.5.1 2025-09-23T00:53:49,670 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.owasp.encoder/encoder/1.3.1 2025-09-23T00:53:49,670 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang.modules/scala-parser-combinators_2.13/1.1.2 2025-09-23T00:53:49,671 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang/scala-library/2.13.16 2025-09-23T00:53:49,680 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang/scala-reflect/2.13.16 2025-09-23T00:53:49,685 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.reactivestreams/reactive-streams/1.0.4 2025-09-23T00:53:49,685 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.codehaus.woodstox/stax2-api/4.2.2 2025-09-23T00:53:49,686 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:tech.pantheon.triemap/triemap/1.3.2 2025-09-23T00:53:49,687 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:net.java.dev.stax-utils/stax-utils/20070216 2025-09-23T00:53:49,688 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:org.lmdbjava/lmdbjava/0.7.0 2025-09-23T00:53:49,703 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-password-service-config.xml 2025-09-23T00:53:49,704 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/configuration/factory/pekko.conf 2025-09-23T00:53:49,705 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/org.opendaylight.controller.cluster.datastore.cfg 2025-09-23T00:53:49,710 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0//etc/org.jolokia.osgi.cfg 2025-09-23T00:53:49,710 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg 2025-09-23T00:53:49,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file 
/tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/default-openflow-connection-config.xml 2025-09-23T00:53:49,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/legacy-openflow-connection-config.xml 2025-09-23T00:53:49,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-cert-config.xml 2025-09-23T00:53:49,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/jetty-web.xml 2025-09-23T00:53:49,714 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/org.opendaylight.restconf.nb.rfc8040.cfg 2025-09-23T00:53:49,716 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-app-config.xml 2025-09-23T00:53:49,716 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-datastore-config.xml 2025-09-23T00:53:49,717 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/bin/idmtool 2025-09-23T00:53:49,717 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0//etc/org.opendaylight.aaa.filterchain.cfg 2025-09-23T00:53:49,717 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Refreshing bundles: 2025-09-23T00:53:49,717 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 (Attached fragments changed: [org.ops4j.pax.web.pax-web-compatibility-el2/8.0.30]) 2025-09-23T00:53:49,717 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.enterprise.cdi-api/2.0.0.SP1 (Wired to javax.el-api/3.0.3 which is being refreshed) 2025-09-23T00:53:49,717 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.transaction-api/1.2.0 (Wired to javax.enterprise.cdi-api/2.0.0.SP1 which is being refreshed) 2025-09-23T00:53:49,718 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.jasypt/1.9.3.1 (Should be wired to: jakarta.servlet-api/4.0.0 (through [org.apache.servicemix.bundles.jasypt/1.9.3.1] osgi.wiring.package; resolution:=optional; filter:="(osgi.wiring.package=javax.servlet)")) 2025-09-23T00:53:49,718 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 (Bundle will be uninstalled) 2025-09-23T00:53:49,718 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.config/1.5.7 (Wired to org.apache.servicemix.bundles.jasypt/1.9.3.1 which is being refreshed) 2025-09-23T00:53:49,718 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core 
- 4.4.7 | org.ops4j.pax.jdbc.pool.common/1.5.7 (Wired to javax.transaction-api/1.2.0 which is being refreshed) 2025-09-23T00:53:50,488 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Starting bundles: 2025-09-23T00:53:50,490 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm/9.7.1 2025-09-23T00:53:50,491 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.tree/9.7.1 2025-09-23T00:53:50,491 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.tree.analysis/9.7.1 2025-09-23T00:53:50,491 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.util/9.7.1 2025-09-23T00:53:50,491 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.commons/9.7.1 2025-09-23T00:53:50,492 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.proxy/1.1.14 2025-09-23T00:53:50,496 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.quiesce.api/1.0.0 2025-09-23T00:53:50,496 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.blueprint.api/1.0.1 2025-09-23T00:53:50,497 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.blueprint.core/1.10.3 2025-09-23T00:53:50,647 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.core/1.10.3 has been started 2025-09-23T00:53:50,648 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.blueprint.cm/1.3.2 2025-09-23T00:53:50,667 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.cm/1.3.2 has been started 2025-09-23T00:53:50,668 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.blueprint/4.4.7 2025-09-23T00:53:50,674 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.wrap/4.4.7 2025-09-23T00:53:50,676 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.restconf.nb.rfc8040} from /tmp/karaf-0.23.0/etc/org.opendaylight.restconf.nb.rfc8040.cfg 2025-09-23T00:53:50,678 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.kar/4.4.7 2025-09-23T00:53:50,678 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.jolokia.osgi} from /tmp/karaf-0.23.0/etc/org.jolokia.osgi.cfg 2025-09-23T00:53:50,680 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.controller.cluster.datastore} from /tmp/karaf-0.23.0/etc/org.opendaylight.controller.cluster.datastore.cfg 2025-09-23T00:53:50,680 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 
4.4.7 | org.apache.karaf.deployer.features/4.4.7 2025-09-23T00:53:50,681 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.openflowplugin} from /tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg 2025-09-23T00:53:50,682 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.aaa.filterchain} from /tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg 2025-09-23T00:53:50,691 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.continuation/9.4.57.v20241219 2025-09-23T00:53:50,693 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.sshd.osgi/2.14.0 2025-09-23T00:53:50,695 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.sshd.scp/2.14.0 2025-09-23T00:53:50,696 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.sshd.sftp/2.14.0 2025-09-23T00:53:50,697 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.jline/3.21.0 2025-09-23T00:53:50,698 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.core/4.4.7 2025-09-23T00:53:50,730 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.deployer.kar/4.4.7 2025-09-23T00:53:50,733 | INFO | features-3-thread-1 | Activator | 120 - org.apache.karaf.shell.core - 4.4.7 | Not starting local console. To activate set karaf.startLocalConsole=true 2025-09-23T00:53:50,756 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.karaf.shell.core/4.4.7 has been started 2025-09-23T00:53:50,757 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.log.core/4.4.7 2025-09-23T00:53:50,777 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.log.core/4.4.7. 
Missing service: [org.apache.karaf.log.core.LogService] 2025-09-23T00:53:50,777 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.log.core/4.4.7 2025-09-23T00:53:50,783 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.service.core/4.4.7 2025-09-23T00:53:50,790 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.service.core/4.4.7 2025-09-23T00:53:50,791 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.util/9.4.57.v20241219 2025-09-23T00:53:50,786 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Unregistering commands for bundle org.apache.karaf.log.core/4.4.7 2025-09-23T00:53:50,793 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.util.ajax/9.4.57.v20241219 2025-09-23T00:53:50,793 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.blueprint.api/1.2.0 2025-09-23T00:53:50,794 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.config.command/4.4.7 2025-09-23T00:53:50,796 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.log.core/4.4.7 2025-09-23T00:53:50,801 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.config.command/4.4.7 2025-09-23T00:53:50,858 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.jmx/9.4.57.v20241219 2025-09-23T00:53:50,860 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.io/9.4.57.v20241219 2025-09-23T00:53:50,861 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.http/9.4.57.v20241219 2025-09-23T00:53:50,862 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.server/9.4.57.v20241219 2025-09-23T00:53:50,863 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.security/9.4.57.v20241219 2025-09-23T00:53:50,864 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.servlet/9.4.57.v20241219 2025-09-23T00:53:50,864 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.jaas/9.4.57.v20241219 2025-09-23T00:53:50,864 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jaas.config/4.4.7 2025-09-23T00:53:50,871 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.websocket-api/1.1.2 2025-09-23T00:53:50,871 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jaas.modules/4.4.7 2025-09-23T00:53:50,876 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.apache.karaf.jaas.command/4.4.7 2025-09-23T00:53:50,885 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-09-23T00:53:50,887 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-09-23T00:53:50,887 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-09-23T00:53:50,888 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.api/1.1.5 2025-09-23T00:53:50,889 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.core/1.1.8 2025-09-23T00:53:50,891 | INFO | features-3-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Starting JMX OSGi agent 2025-09-23T00:53:50,900 | INFO | features-3-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006] for service with service.id [15] 2025-09-23T00:53:50,901 | INFO | features-3-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006] for service with service.id [39] 2025-09-23T00:53:50,903 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.servlets/9.4.57.v20241219 2025-09-23T00:53:50,904 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.bundle.core/4.4.7 2025-09-23T00:53:50,921 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.bundle.core/4.4.7 2025-09-23T00:53:50,922 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.bundle.blueprintstate/4.4.7 2025-09-23T00:53:50,929 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.osgi.service.component/1.5.1.202212101352 2025-09-23T00:53:50,929 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.felix.scr/2.2.6 2025-09-23T00:53:50,936 | INFO | features-3-thread-1 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Starting with globalExtender setting: false 2025-09-23T00:53:50,939 | INFO | features-3-thread-1 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Version = 2.2.6 2025-09-23T00:53:50,950 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.scr.state/4.4.7 2025-09-23T00:53:50,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.table/4.4.7 2025-09-23T00:53:51,002 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.servlet-api/4.0.0 2025-09-23T00:53:51,004 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-api/8.0.30 
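The Felix SCR runtime started above scans bundles for Declarative Services component descriptors and activates the components they declare; much of what follows in this log is SCR-driven. A minimal sketch of such a component, assuming the standard org.osgi.service.component.annotations API is on the compile classpath; the class name and log messages are illustrative only:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

// Illustrative Declarative Services component of the kind the Felix SCR runtime
// started above would activate once its bundle is resolved and started.
@Component(immediate = true)
public final class ReadyLogger {

    @Activate
    void activate() {
        System.out.println("component activated by SCR");
    }

    @Deactivate
    void deactivate() {
        System.out.println("component deactivated by SCR");
    }
}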
2025-09-23T00:53:51,006 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-spi/8.0.30 2025-09-23T00:53:51,007 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-websocket/8.0.30 2025-09-23T00:53:51,008 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.client/9.4.57.v20241219 2025-09-23T00:53:51,009 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.management.server/4.4.7 2025-09-23T00:53:51,018 | INFO | activator-1-thread-1 | Activator | 113 - org.apache.karaf.management.server - 4.4.7 | Setting java.rmi.server.hostname system property to 127.0.0.1 2025-09-23T00:53:51,019 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.scr.management/4.4.7 2025-09-23T00:53:51,023 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jdt.core.compiler.batch/3.26.0.v20210609-0549 2025-09-23T00:53:51,026 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.ssh/4.4.7 2025-09-23T00:53:51,060 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.shell.ssh/4.4.7. Missing service: [org.apache.sshd.server.SshServer] 2025-09-23T00:53:51,060 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.features.command/4.4.7 2025-09-23T00:53:51,086 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.features.command/4.4.7 2025-09-23T00:53:51,088 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.kar.core/4.4.7 2025-09-23T00:53:51,101 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,102 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,103 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,104 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name 
osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,104 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,104 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,104 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,118 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.kar.core/4.4.7 2025-09-23T00:53:51,119 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 2025-09-23T00:53:51,130 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Activating the Apache Karaf ServiceComponentRuntime MBean 2025-09-23T00:53:51,132 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,133 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,133 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,133 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,133 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name 
osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,133 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,134 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Unregistering org.osgi.jmx.framework.BundleStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@4a442598 with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,134 | WARN | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Task rejected for JMX Notification dispatch of event [org.osgi.framework.ServiceEvent[source={javax.management.MBeanServer}={service.id=136, service.bundleid=113, service.scope=singleton}]] - Dispatcher may have been shutdown 2025-09-23T00:53:51,135 | WARN | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Task rejected for JMX Notification dispatch of event [org.osgi.framework.ServiceEvent[source={org.apache.karaf.kar.KarService}={service.id=138, service.bundleid=111, service.scope=singleton}]] - Dispatcher may have been shutdown 2025-09-23T00:53:51,135 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Deactivating the Apache Karaf ServiceComponentRuntime MBean 2025-09-23T00:53:51,144 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-jsp/8.0.30 2025-09-23T00:53:51,145 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-tomcat-common/8.0.30 2025-09-23T00:53:51,146 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.xml/9.4.57.v20241219 2025-09-23T00:53:51,146 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-jetty/8.0.30 2025-09-23T00:53:51,168 | INFO | features-3-thread-1 | log | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Logging initialized @8199ms to org.eclipse.jetty.util.log.Slf4jLog 2025-09-23T00:53:51,183 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-runtime/8.0.30 2025-09-23T00:53:51,279 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.web]) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because configuration has changed 2025-09-23T00:53:51,279 | INFO | features-3-thread-1 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | EventAdmin support enabled, WAB events will be posted to EventAdmin topics. 
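The ObjectNames being registered and re-registered here (osgi.core:type=bundleState, type=serviceState, and so on) are the standard OSGi JMX MBeans, exposed through Karaf's remote JMX connector. A minimal sketch for reading one of them from outside the process, assuming an unmodified etc/org.apache.karaf.management.cfg (RMI registry on port 1099, instance name "root") and the stock karaf/karaf credentials; the service URL, credentials, and class name are assumptions, not values taken from this log:

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Illustrative client for the OSGi MBeans registered above.
// URL, port and credentials assume default Karaf management settings.
public final class JmxPeek {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/karaf-root");
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"karaf", "karaf"});
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            // Matches the bundleState ObjectName pattern shown in the registrations above.
            for (ObjectName name : mbeans.queryNames(
                    new ObjectName("osgi.core:type=bundleState,version=1.7,*"), null)) {
                System.out.println(name);
            }
        }
    }
}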
2025-09-23T00:53:51,280 | INFO | features-3-thread-1 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Pax Web Runtime started 2025-09-23T00:53:51,281 | INFO | paxweb-config-1-thread-1 (change config) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because ServerControllerFactory has been registered 2025-09-23T00:53:51,288 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.http.core/4.4.7 2025-09-23T00:53:51,328 | WARN | activator-1-thread-1 | Activator | 113 - org.apache.karaf.management.server - 4.4.7 | java.rmi.server.hostname system property is already set to 127.0.0.1. Apache Karaf doesn't override it 2025-09-23T00:53:51,354 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.http.core/4.4.7. Missing service: [org.apache.karaf.http.core.ProxyService] 2025-09-23T00:53:51,355 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-extender-war/8.0.30 2025-09-23T00:53:51,363 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,364 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,364 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,364 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,364 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,365 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,365 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.BundleStateMBean to 
MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2c2c6cdc with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=7b0dbff0-b342-40f8-82ec-fc87bc5fb006 2025-09-23T00:53:51,380 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Activating the Apache Karaf ServiceComponentRuntime MBean 2025-09-23T00:53:51,413 | INFO | features-3-thread-1 | Activator | 392 - org.ops4j.pax.web.pax-web-extender-war - 8.0.30 | Configuring WAR extender thread pool. Pool size = 3 2025-09-23T00:53:51,434 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.ssh/4.4.7 2025-09-23T00:53:51,447 | INFO | activator-1-thread-1 | DefaultIoServiceFactoryFactory | 125 - org.apache.sshd.osgi - 2.14.0 | No detected/configured IoServiceFactoryFactory; using Nio2ServiceFactoryFactory 2025-09-23T00:53:51,454 | INFO | paxweb-config-1-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Configuring server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-09-23T00:53:51,455 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Configuring JettyServerController{configuration=aecb3e04-3180-4b42-9be5-227c090c9536,state=UNCONFIGURED} 2025-09-23T00:53:51,455 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating Jetty server instance using configuration properties. 2025-09-23T00:53:51,482 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Processing Jetty configuration from files: [etc/jetty.xml] 2025-09-23T00:53:51,611 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.url.war/2.6.16 2025-09-23T00:53:51,635 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.system.core/4.4.7 2025-09-23T00:53:51,646 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.system.core/4.4.7 2025-09-23T00:53:51,646 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.whiteboard/1.2.0 2025-09-23T00:53:51,664 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-extender-whiteboard/8.0.30 2025-09-23T00:53:51,666 | INFO | features-3-thread-1 | Activator | 393 - org.ops4j.pax.web.pax-web-extender-whiteboard - 8.0.30 | Starting Pax Web Whiteboard Extender 2025-09-23T00:53:51,672 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Found configured connector "jetty-default": 0.0.0.0:8181 2025-09-23T00:53:51,673 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Using configured jetty-default@1e0157cc{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} as non secure connector for address: 0.0.0.0:8181 2025-09-23T00:53:51,675 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Eagerly starting Jetty thread pool 
QueuedThreadPool[qtp964829283]@39822063{STOPPED,0<=0<=200,i=0,r=-1,q=0}[NO_TRY] 2025-09-23T00:53:51,682 | INFO | paxweb-config-1-thread-1 (change controller) | JettyFactory | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding JMX support to Jetty server 2025-09-23T00:53:51,684 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.commands/4.4.7 2025-09-23T00:53:51,694 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-09-23T00:53:51,694 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-09-23T00:53:51,695 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.instance.core/4.4.7 2025-09-23T00:53:51,714 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.instance.core/4.4.7 2025-09-23T00:53:51,715 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.package.core/4.4.7 2025-09-23T00:53:51,722 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.package.core/4.4.7 2025-09-23T00:53:51,722 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.web.core/4.4.7 2025-09-23T00:53:51,726 | INFO | paxweb-config-1-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Starting server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-09-23T00:53:51,726 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting JettyServerController{configuration=aecb3e04-3180-4b42-9be5-227c090c9536,state=STOPPED} 2025-09-23T00:53:51,727 | INFO | paxweb-config-1-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Server@1bf96db1{STOPPED}[9.4.57.v20241219] 2025-09-23T00:53:51,728 | INFO | paxweb-config-1-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 21.0.5+11-Ubuntu-1ubuntu122.04 2025-09-23T00:53:51,729 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.web.core/4.4.7. 
Missing service: [org.apache.karaf.web.WebContainerService] 2025-09-23T00:53:51,729 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.diagnostic.core/4.4.7 2025-09-23T00:53:51,736 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.diagnostic.core/4.4.7 2025-09-23T00:53:51,736 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.blueprint.core/1.2.0 2025-09-23T00:53:51,744 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.blueprint/11.0.0 2025-09-23T00:53:51,746 | INFO | features-3-thread-1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Starting BlueprintBundleTracker 2025-09-23T00:53:51,755 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.karaf.shell.core_4.4.7 [120] was successfully created 2025-09-23T00:53:51,755 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.core_1.10.3 [79] was successfully created 2025-09-23T00:53:51,755 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.cm_1.3.2 [78] was successfully created 2025-09-23T00:53:51,757 | INFO | paxweb-config-1-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | DefaultSessionIdManager workerName=node0 2025-09-23T00:53:51,758 | INFO | paxweb-config-1-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | No SessionScavenger set, using defaults 2025-09-23T00:53:51,760 | INFO | paxweb-config-1-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | node0 Scavenging every 600000ms 2025-09-23T00:53:51,815 | INFO | paxweb-config-1-thread-1 (change controller) | AbstractConnector | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started jetty-default@1e0157cc{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} 2025-09-23T00:53:51,816 | INFO | paxweb-config-1-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started @8849ms 2025-09-23T00:53:51,818 | INFO | paxweb-config-1-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpService factory 2025-09-23T00:53:51,821 | INFO | paxweb-config-1-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.http.core_4.4.7 [105]] 2025-09-23T00:53:51,844 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.http.core/4.4.7 2025-09-23T00:53:51,849 | INFO | HttpService->WarExtender (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-war_8.0.30 [392]] 2025-09-23T00:53:51,855 | INFO | paxweb-config-1-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding 
HTTP Service for bundle: [org.apache.karaf.web.core_4.4.7 [124]] 2025-09-23T00:53:51,856 | INFO | HttpService->Whiteboard (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard_8.0.30 [393]] 2025-09-23T00:53:51,857 | INFO | paxweb-config-1-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpServiceRuntime 2025-09-23T00:53:51,863 | INFO | paxweb-config-1-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-09-23T00:53:51,864 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)}", size=2} 2025-09-23T00:53:51,864 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-09-23T00:53:51,879 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.web.core/4.4.7 2025-09-23T00:53:51,906 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.google.guava.failureaccess/1.0.3 2025-09-23T00:53:51,908 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.annotation-api/1.3.5 2025-09-23T00:53:51,909 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.google.guava/33.4.8.jre 2025-09-23T00:53:51,910 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.concepts/14.0.14 2025-09-23T00:53:51,910 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-common/14.0.14 2025-09-23T00:53:51,911 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-spec/14.0.14 2025-09-23T00:53:51,911 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | triemap/1.3.2 2025-09-23T00:53:51,912 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.util/14.0.14 2025-09-23T00:53:51,912 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-xpath-api/14.0.14 2025-09-23T00:53:51,912 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-api/14.0.14 2025-09-23T00:53:51,913 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-api/14.0.14 2025-09-23T00:53:51,913 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-api/14.0.14 2025-09-23T00:53:51,913 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-model/14.0.14 
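The entries above show Pax Web starting the default Jetty connector "jetty-default" on 0.0.0.0:8181 and binding the HttpService for several bundles. A minimal Java sketch for checking that this connector is actually accepting connections, assuming the controller is reachable as localhost and treating any HTTP status line (even 401 or 404) as evidence that the listener is up:

// Minimal sketch, not part of this log: probe the Jetty connector reported
// above as "jetty-default": 0.0.0.0:8181. "localhost" is an assumption;
// substitute the controller's address if it runs elsewhere.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8181/"))   // port taken from the log entries above
                .GET()
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        // Any status code proves the connector answered; 401/404 are expected on "/".
        System.out.println("jetty-default answered with HTTP " + response.statusCode());
    }
}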
2025-09-23T00:53:51,914 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8040-model-api/14.0.14 2025-09-23T00:53:51,914 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-repo-api/14.0.14 2025-09-23T00:53:51,915 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-runtime-api/14.0.14 2025-09-23T00:53:51,915 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-spi/14.0.14 2025-09-23T00:53:51,916 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.core/4.2.32 2025-09-23T00:53:51,916 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.jvm/4.2.32 2025-09-23T00:53:51,917 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.healthchecks/4.2.32 2025-09-23T00:53:51,918 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.external.aopalliance-repackaged/2.6.1 2025-09-23T00:53:51,918 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.servlet-api/3.1.0 2025-09-23T00:53:51,919 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.jolokia.osgi/1.7.2 2025-09-23T00:53:51,922 | INFO | features-3-thread-1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.jolokia.osgi_1.7.2 [155]] 2025-09-23T00:53:51,928 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)} to o.o.p.w.s.j.i.PaxWebServletContextHandler@37a0daf1{/,null,STOPPED} 2025-09-23T00:53:51,933 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@37a0daf1{/,null,STOPPED} 2025-09-23T00:53:51,937 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@4c77d4d8,contexts=[{HS,OCM-5,context:1788736511,/}]} 2025-09-23T00:53:51,937 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@4c77d4d8,contexts=null}", size=3} 2025-09-23T00:53:51,938 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{HS,id=OCM-5,name='context:1788736511',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 
[155],contextId='context:1788736511',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@6a9df3ff}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@37a0daf1{/,null,STOPPED} 2025-09-23T00:53:51,939 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@37a0daf1{/,null,STOPPED} 2025-09-23T00:53:51,939 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@4c77d4d8,contexts=[{HS,OCM-5,context:1788736511,/}]} 2025-09-23T00:53:51,948 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/" with default Osgi Context OsgiContextModel{HS,id=OCM-5,name='context:1788736511',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:1788736511',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@6a9df3ff}} 2025-09-23T00:53:51,967 | INFO | paxweb-config-1-thread-1 | osgi | 155 - org.jolokia.osgi - 1.7.2 | No access restrictor found, access to any MBean is allowed 2025-09-23T00:53:51,994 | INFO | paxweb-config-1-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@37a0daf1{/,null,AVAILABLE} 2025-09-23T00:53:51,995 | INFO | paxweb-config-1-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{HS,id=OCM-5,name='context:1788736511',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:1788736511',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@6a9df3ff}}} as OSGi service for "/" context path 2025-09-23T00:53:51,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.ietf-type-util/14.0.13 2025-09-23T00:53:52,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-reflect/14.0.14 2025-09-23T00:53:52,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6991-ietf-yang-types/14.0.13 2025-09-23T00:53:52,001 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.osgi-resource-locator/1.0.3 2025-09-23T00:53:52,053 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.lang3/3.17.0 2025-09-23T00:53:52,054 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.libraries.liblldp/0.20.0 2025-09-23T00:53:52,055 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.scala-lang.scala-library/2.13.16.v20250107-233423-VFINAL-3f6bdae 2025-09-23T00:53:52,057 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.scala-lang.modules.scala-parser-combinators/1.1.2 2025-09-23T00:53:52,058 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | karaf.branding/14.1.0 2025-09-23T00:53:52,059 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | stax2-api/4.2.2 2025-09-23T00:53:52,059 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6991-ietf-inet-types/14.0.13 2025-09-23T00:53:52,060 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8343/14.0.13 2025-09-23T00:53:52,060 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8341/14.0.13 2025-09-23T00:53:52,061 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8344/14.0.13 2025-09-23T00:53:52,063 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8528/14.0.13 2025-09-23T00:53:52,064 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8529/14.0.13 2025-09-23T00:53:52,066 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8040-ietf-restconf/14.0.13 2025-09-23T00:53:52,067 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8639/14.0.13 2025-09-23T00:53:52,069 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8650/14.0.13 2025-09-23T00:53:52,070 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.jspecify.jspecify/1.0.0 2025-09-23T00:53:52,071 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6241/14.0.13 2025-09-23T00:53:52,072 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6470/14.0.13 2025-09-23T00:53:52,072 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.collections/3.2.2 2025-09-23T00:53:52,074 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.geronimo.specs.geronimo-atinject_1.0_spec/1.2.0 2025-09-23T00:53:52,079 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-common-api/14.0.13 2025-09-23T00:53:52,080 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-binding-api/14.0.13 2025-09-23T00:53:52,081 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.srm-api/0.20.0 2025-09-23T00:53:52,081 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | checker-qual/3.49.3 2025-09-23T00:53:52,082 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.opendaylight.yangtools.yang-data-tree-api/14.0.14 2025-09-23T00:53:52,082 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-api/14.0.13 2025-09-23T00:53:52,083 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.odlext-model-api/14.0.14 2025-09-23T00:53:52,083 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-tree-spi/14.0.14 2025-09-23T00:53:52,084 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-spi/14.0.13 2025-09-23T00:53:52,084 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-spi/14.0.14 2025-09-23T00:53:52,085 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8528-model-api/14.0.14 2025-09-23T00:53:52,085 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc7952-model-api/14.0.14 2025-09-23T00:53:52,086 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-ir/14.0.14 2025-09-23T00:53:52,086 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-spi/14.0.14 2025-09-23T00:53:52,087 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-util/14.0.14 2025-09-23T00:53:52,087 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-util/14.0.14 2025-09-23T00:53:52,087 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-impl/14.0.14 2025-09-23T00:53:52,088 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | net.bytebuddy.byte-buddy/1.17.5 2025-09-23T00:53:52,091 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-loader/14.0.14 2025-09-23T00:53:52,091 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-dynamic/14.0.14 2025-09-23T00:53:52,097 | INFO | features-3-thread-1 | SimpleBindingDOMCodecFactory | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Binding/DOM Codec enabled 2025-09-23T00:53:52,098 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-api/14.0.14 2025-09-23T00:53:52,101 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-spi/14.0.14 2025-09-23T00:53:52,102 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.odlext-parser-support/14.0.14 2025-09-23T00:53:52,102 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.openconfig-model-api/14.0.14 2025-09-23T00:53:52,103 | 
INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.openconfig-parser-support/14.0.14 2025-09-23T00:53:52,103 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6241-model-api/14.0.14 2025-09-23T00:53:52,103 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6241-parser-support/14.0.14 2025-09-23T00:53:52,104 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6536-model-api/14.0.14 2025-09-23T00:53:52,104 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6536-parser-support/14.0.14 2025-09-23T00:53:52,104 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6643-model-api/14.0.14 2025-09-23T00:53:52,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6643-parser-support/14.0.14 2025-09-23T00:53:52,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-ri/14.0.14 2025-09-23T00:53:52,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc7952-parser-support/14.0.14 2025-09-23T00:53:52,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8040-parser-support/14.0.14 2025-09-23T00:53:52,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8528-parser-support/14.0.14 2025-09-23T00:53:52,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8639-model-api/14.0.14 2025-09-23T00:53:52,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8639-parser-support/14.0.14 2025-09-23T00:53:52,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8819-model-api/14.0.14 2025-09-23T00:53:52,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8819-parser-support/14.0.14 2025-09-23T00:53:52,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-repo-spi/14.0.14 2025-09-23T00:53:52,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.antlr.antlr4-runtime/4.13.2 2025-09-23T00:53:52,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-reactor/14.0.14 2025-09-23T00:53:52,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-rfc7950/14.0.14 2025-09-23T00:53:52,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-xpath-impl/14.0.14 2025-09-23T00:53:52,113 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-impl/14.0.14 2025-09-23T00:53:52,117 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-runtime-spi/14.0.14 2025-09-23T00:53:52,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-generator/14.0.14 2025-09-23T00:53:52,124 | INFO | features-3-thread-1 | DefaultBindingRuntimeGenerator | 328 - org.opendaylight.yangtools.binding-generator - 14.0.14 | Binding/YANG type support activated 2025-09-23T00:53:52,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-runtime-osgi/14.0.14 2025-09-23T00:53:52,134 | INFO | features-3-thread-1 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activating 2025-09-23T00:53:52,135 | INFO | features-3-thread-1 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activated 2025-09-23T00:53:52,140 | INFO | features-3-thread-1 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime starting 2025-09-23T00:53:52,173 | INFO | features-3-thread-1 | KarafFeaturesSupport | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Will attempt to integrate with Karaf FeaturesService 2025-09-23T00:53:52,801 | INFO | features-3-thread-1 | NettyTransportSupport | 284 - org.opendaylight.netconf.transport-api - 9.0.0 | Netty transport backed by epoll(2) 2025-09-23T00:53:53,029 | INFO | features-3-thread-1 | SharedEffectiveModelContextFactory | 379 - org.opendaylight.yangtools.yang-parser-impl - 14.0.14 | Using weak references 2025-09-23T00:53:54,876 | INFO | features-3-thread-1 | OSGiModuleInfoSnapshotImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | EffectiveModelContext generation 1 activated 2025-09-23T00:53:55,518 | INFO | features-3-thread-1 | OSGiBindingRuntimeContextImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | BindingRuntimeContext generation 1 activated 2025-09-23T00:53:55,519 | INFO | features-3-thread-1 | GlobalBindingRuntimeContext | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Global BindingRuntimeContext generation 1 activated 2025-09-23T00:53:55,519 | INFO | features-3-thread-1 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime started 2025-09-23T00:53:55,519 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-osgi/14.0.14 2025-09-23T00:53:55,528 | INFO | features-3-thread-1 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activating 2025-09-23T00:53:55,547 | INFO | features-3-thread-1 | OSGiBindingDOMCodecServicesImpl | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec generation 1 activated 2025-09-23T00:53:55,547 | INFO | features-3-thread-1 | GlobalBindingDOMCodecServices | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Global Binding/DOM Codec activated with generation 1 2025-09-23T00:53:55,549 | INFO | features-3-thread-1 | OSGiBindingDOMCodec | 326 - 
org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activated 2025-09-23T00:53:55,550 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding-dom-adapter/14.0.13 2025-09-23T00:53:55,565 | INFO | features-3-thread-1 | OSGiBlockingBindingNormalizer | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter activated 2025-09-23T00:53:55,572 | INFO | features-3-thread-1 | DynamicBindingAdapter | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | 8 DOMService trackers started 2025-09-23T00:53:55,573 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.srm-impl/0.20.0 2025-09-23T00:53:55,578 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-common-util/11.0.0 2025-09-23T00:53:55,583 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-eos-common-api/14.0.13 2025-09-23T00:53:55,584 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.general-entity/14.0.13 2025-09-23T00:53:55,584 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-eos-binding-api/14.0.13 2025-09-23T00:53:55,585 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-export/14.0.14 2025-09-23T00:53:55,585 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-public-key-algs/14.0.13 2025-09-23T00:53:55,586 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-encryption-algs/14.0.13 2025-09-23T00:53:55,586 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-key-exchange-algs/14.0.13 2025-09-23T00:53:55,586 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-mac-algs/14.0.13 2025-09-23T00:53:55,587 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9640/14.0.13 2025-09-23T00:53:55,587 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9642/14.0.13 2025-09-23T00:53:55,587 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9644-ietf-ssh-common/14.0.13 2025-09-23T00:53:55,588 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javassist/3.30.2.GA 2025-09-23T00:53:55,589 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8342-ietf-datastores/14.0.13 2025-09-23T00:53:55,589 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6243/14.0.13 2025-09-23T00:53:55,589 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc7952/14.0.13 2025-09-23T00:53:55,590 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8342-ietf-origin/14.0.13 2025-09-23T00:53:55,590 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8526/14.0.13 2025-09-23T00:53:55,591 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.validation.jakarta.validation-api/2.0.2 2025-09-23T00:53:55,592 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.typesafe.config/1.4.3 2025-09-23T00:53:55,592 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.typesafe.sslconfig/0.6.1 2025-09-23T00:53:55,593 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.agrona.core/1.15.2 2025-09-23T00:53:55,593 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.aeron.client/1.38.1 2025-09-23T00:53:55,594 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.aeron.driver/1.38.1 2025-09-23T00:53:55,594 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap_file__tmp_karaf-0.23.0_system_org_lmdbjava_lmdbjava_0.7.0_lmdbjava-0.7.0.jar/0.0.0 2025-09-23T00:53:55,595 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | reactive-streams/1.0.4 2025-09-23T00:53:55,598 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.repackaged-pekko/11.0.0 2025-09-23T00:53:55,601 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.ietf-topology/2013.10.21.26_13 2025-09-23T00:53:55,602 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.yang-ext/2013.9.7.26_13 2025-09-23T00:53:55,603 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.inventory/0.20.0 2025-09-23T00:53:55,604 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.opendaylight-l2-types/2013.8.27.26_13 2025-09-23T00:53:55,605 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.codegen-extensions/14.0.14 2025-09-23T00:53:55,606 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.openflow-protocol-api/0.20.0 2025-09-23T00:53:55,607 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.flow-base/0.20.0 2025-09-23T00:53:55,608 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.flow-service/0.20.0 2025-09-23T00:53:55,609 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.opendaylight.openflowplugin.model.flow-statistics/0.20.0 2025-09-23T00:53:55,610 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.util/1.1.3 2025-09-23T00:53:55,611 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.google.gson/2.13.1 2025-09-23T00:53:55,612 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.h2database/2.3.232 2025-09-23T00:53:55,623 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-codec-binfmt/14.0.14 2025-09-23T00:53:55,623 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-access-api/11.0.0 2025-09-23T00:53:55,624 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.resolver/4.2.2.Final 2025-09-23T00:53:55,624 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.transport/4.2.2.Final 2025-09-23T00:53:55,625 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-cluster-admin-api/11.0.0 2025-09-23T00:53:55,625 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.netconf-api/9.0.0 2025-09-23T00:53:55,625 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.dom-api/9.0.0 2025-09-23T00:53:55,626 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-base/4.2.2.Final 2025-09-23T00:53:55,627 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | lz4-java/1.8.0 2025-09-23T00:53:55,628 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-compression/4.2.2.Final 2025-09-23T00:53:55,628 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.transport-native-unix-common/4.2.2.Final 2025-09-23T00:53:55,628 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.handler/4.2.2.Final 2025-09-23T00:53:55,629 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-http/4.2.2.Final 2025-09-23T00:53:55,629 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.jmx/4.2.32 2025-09-23T00:53:55,630 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.scala-lang.scala-reflect/2.13.16.v20250107-233423-VFINAL-3f6bdae 2025-09-23T00:53:55,630 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9641/14.0.13 2025-09-23T00:53:55,631 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9644-ietf-ssh-client/14.0.13 2025-09-23T00:53:55,631 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.commons-beanutils/1.11.0 2025-09-23T00:53:55,631 | INFO | features-3-thread-1 
| FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.owasp.encoder/1.3.1 2025-09-23T00:53:55,632 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.repackaged-shiro/0.21.0 2025-09-23T00:53:55,633 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.model.sal-remote/9.0.0 2025-09-23T00:53:55,634 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.util/7.1.4 2025-09-23T00:53:55,635 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-singleton-api/14.0.13 2025-09-23T00:53:55,635 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.ready-api/7.1.4 2025-09-23T00:53:55,636 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.diagstatus-api/7.1.4 2025-09-23T00:53:55,637 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-binding-spi/14.0.13 2025-09-23T00:53:55,637 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.transport-classes-epoll/4.2.2.Final 2025-09-23T00:53:55,638 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.openflow-protocol-spi/0.20.0 2025-09-23T00:53:55,638 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.util/0.20.0 2025-09-23T00:53:55,638 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-common-netty/14.0.14 2025-09-23T00:53:55,639 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.odlparent.bundles-diag/14.1.0 2025-09-23T00:53:55,642 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.ready-impl/7.1.4 2025-09-23T00:53:55,652 | INFO | features-3-thread-1 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | ThreadFactory created: SystemReadyService 2025-09-23T00:53:55,653 | INFO | features-3-thread-1 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | Now starting to provide full system readiness status updates (see TestBundleDiag's logs)... 2025-09-23T00:53:55,656 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.diagstatus-impl/7.1.4 2025-09-23T00:53:55,657 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos() started... 2025-09-23T00:53:55,663 | INFO | features-3-thread-1 | DiagStatusServiceImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service started 2025-09-23T00:53:55,667 | INFO | features-3-thread-1 | MBeanUtils | 198 - org.opendaylight.infrautils.diagstatus-api - 7.1.4 | MBean registration for org.opendaylight.infrautils.diagstatus:type=SvcStatus SUCCESSFUL. 
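The entries above register the diagnostic-status MBean org.opendaylight.infrautils.diagstatus:type=SvcStatus, and earlier entries bound the Jolokia servlet at /jolokia on the same Jetty instance. A minimal sketch that reads that MBean through Jolokia's HTTP read operation, assuming localhost:8181 and default admin/admin basic-auth credentials, neither of which this log confirms:

// Minimal sketch, not part of this log: read the MBean registered above
// (org.opendaylight.infrautils.diagstatus:type=SvcStatus) via the Jolokia
// servlet mounted at /jolokia. Host, port and the admin/admin credentials are
// assumptions about a default setup.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class DiagStatusProbe {
    public static void main(String[] args) throws Exception {
        String mbean = "org.opendaylight.infrautils.diagstatus:type=SvcStatus";
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8181/jolokia/read/" + mbean))
                .header("Authorization", "Basic " + auth)   // may be unnecessary if authentication is disabled
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());   // JSON document with the MBean's attributes
    }
}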
2025-09-23T00:53:55,667 | INFO | features-3-thread-1 | DiagStatusServiceMBeanImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service management started 2025-09-23T00:53:55,668 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl/0.20.0 2025-09-23T00:53:55,673 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.common/0.20.0 2025-09-23T00:53:55,673 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.extension-api/0.20.0 2025-09-23T00:53:55,674 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin/0.20.0 2025-09-23T00:53:55,674 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-eos-dom-api/14.0.13 2025-09-23T00:53:55,674 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.raft-api/11.0.0 2025-09-23T00:53:55,675 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.raft-spi/11.0.0 2025-09-23T00:53:55,675 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.api/0.21.0 2025-09-23T00:53:55,675 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.crypt-hash/14.0.13 2025-09-23T00:53:55,676 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9644-ietf-ssh-server/14.0.13 2025-09-23T00:53:55,676 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.googlecode.json-simple/1.1.1 2025-09-23T00:53:55,676 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.shaded-sshd/9.0.0 2025-09-23T00:53:55,677 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-http2/4.2.2.Final 2025-09-23T00:53:55,677 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.commons-codec/1.15.0 2025-09-23T00:53:55,677 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-api/9.0.0 2025-09-23T00:53:55,678 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9643-ietf-tcp-common/14.0.13 2025-09-23T00:53:55,678 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9643-ietf-tcp-client/14.0.13 2025-09-23T00:53:55,678 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9643-ietf-tcp-server/14.0.13 2025-09-23T00:53:55,678 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-tcp/9.0.0 2025-09-23T00:53:55,679 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.tls-cipher-suite-algs/14.0.13 2025-09-23T00:53:55,679 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9645-ietf-tls-common/14.0.13 2025-09-23T00:53:55,707 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9645-ietf-tls-client/14.0.13 2025-09-23T00:53:55,708 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9645-ietf-tls-server/14.0.13 2025-09-23T00:53:55,708 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-tls/9.0.0 2025-09-23T00:53:55,708 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-http/9.0.0 2025-09-23T00:53:55,709 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc7407-ietf-x509-cert-to-name/14.0.13 2025-09-23T00:53:55,709 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.model.draft-ietf-restconf-server/9.0.0 2025-09-23T00:53:55,710 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap_file__tmp_karaf-0.23.0_system_net_java_dev_stax-utils_stax-utils_20070216_stax-utils-20070216.jar/0.0.0 2025-09-23T00:53:55,710 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.text/1.13.0 2025-09-23T00:53:55,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-access-client/11.0.0 2025-09-23T00:53:55,711 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-mgmt-api/11.0.0 2025-09-23T00:53:55,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.raft-journal/11.0.0 2025-09-23T00:53:55,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.authn-api/0.21.0 2025-09-23T00:53:55,712 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.password-service-api/0.21.0 2025-09-23T00:53:55,713 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.shiro-api/0.21.0 2025-09-23T00:53:55,713 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.encrypt-service/0.21.0 2025-09-23T00:53:55,714 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.cert/0.21.0 2025-09-23T00:53:55,737 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), 
(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-09-23T00:53:55,739 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.atomix-storage/11.0.0 2025-09-23T00:53:55,740 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-akka-segmented-journal/11.0.0 2025-09-23T00:53:55,740 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.bulk-o-matic/0.20.0 2025-09-23T00:53:55,742 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.rabbitmq.client/5.25.0 2025-09-23T00:53:55,743 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-tree-ri/14.0.14 2025-09-23T00:53:55,744 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jdbc.core/4.4.7 2025-09-23T00:53:55,755 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jdbc.core/4.4.7 2025-09-23T00:53:55,755 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-api/9.0.0 2025-09-23T00:53:55,756 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-codec-gson/14.0.14 2025-09-23T00:53:55,757 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-codec-xml/14.0.14 2025-09-23T00:53:55,757 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.databind/9.0.0 2025-09-23T00:53:55,759 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8072/14.0.13 2025-09-23T00:53:55,760 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-api/9.0.0 2025-09-23T00:53:55,760 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.netconf-common-mdsal/9.0.0 2025-09-23T00:53:55,760 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-spi/9.0.0 2025-09-23T00:53:55,761 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-mdsal-spi/9.0.0 2025-09-23T00:53:55,761 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8040-ietf-restconf-monitoring/14.0.13 2025-09-23T00:53:55,762 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-schema-osgi/14.0.13 2025-09-23T00:53:55,768 | INFO | features-3-thread-1 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | DOM Schema services activated 
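The Blueprint container for org.opendaylight.aaa.cert above reports the LDAP-style OSGi service filters it is still waiting on. A minimal sketch, using the standard OSGi Filter API with made-up service properties, of how one of those filters is evaluated against a candidate DataBroker registration (the OSGi framework API is assumed to be on the classpath):

// Minimal sketch, not part of this log: evaluate one of the service filters
// from the "waiting for dependencies" entries above. The properties below are
// hypothetical; a real DataBroker registration carries its own properties.
import java.util.Hashtable;
import org.osgi.framework.Filter;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.InvalidSyntaxException;

public class BlueprintFilterDemo {
    public static void main(String[] args) throws InvalidSyntaxException {
        // Filter copied from the log: a default-typed (or untyped) DataBroker service.
        Filter filter = FrameworkUtil.createFilter(
                "(&(|(type=default)(!(type=*)))"
                + "(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))");

        Hashtable<String, Object> props = new Hashtable<>();
        props.put("objectClass",
                new String[] { "org.opendaylight.mdsal.binding.api.DataBroker" });
        // No "type" property is set, so the (!(type=*)) branch of the filter is satisfied.

        System.out.println("would satisfy the Blueprint reference: " + filter.match(props));
    }
}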
2025-09-23T00:53:55,768 | INFO | features-3-thread-1 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | Updating context to generation 1 2025-09-23T00:53:55,769 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-broker/14.0.13 2025-09-23T00:53:55,781 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for MountPointService activated 2025-09-23T00:53:55,790 | INFO | features-3-thread-1 | DOMRpcRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM RPC/Action router started 2025-09-23T00:53:55,793 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionProviderService activated 2025-09-23T00:53:55,796 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionService activated 2025-09-23T00:53:55,801 | INFO | features-3-thread-1 | DOMNotificationRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM Notification Router started 2025-09-23T00:53:55,803 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationService activated 2025-09-23T00:53:55,806 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationPublishService activated 2025-09-23T00:53:55,809 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcProviderService activated 2025-09-23T00:53:55,809 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-09-23T00:53:55,812 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcService activated 2025-09-23T00:53:55,812 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.servlet-api/0.21.0 2025-09-23T00:53:55,890 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8525/14.0.13 2025-09-23T00:53:55,890 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-singleton-impl/14.0.13 2025-09-23T00:53:55,892 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.yanglib-mdsal-writer/9.0.0 2025-09-23T00:53:55,893 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.osgi-impl/0.21.0 2025-09-23T00:53:55,895 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.core.jersey-client/2.47.0 2025-09-23T00:53:55,896 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.containers.jersey-container-servlet-core/2.47.0 2025-09-23T00:53:55,896 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.servlet-jersey2/0.21.0 2025-09-23T00:53:55,899 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-jaxrs/9.0.0 2025-09-23T00:53:55,904 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-subscription/9.0.0 2025-09-23T00:53:55,906 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.sal-remote-impl/9.0.0 2025-09-23T00:53:55,908 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.odl-device-notification/9.0.0 2025-09-23T00:53:55,909 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-mdsal/9.0.0 2025-09-23T00:53:55,913 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server/9.0.0 2025-09-23T00:53:55,914 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-nb/9.0.0 2025-09-23T00:53:55,917 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.device-ownership-service/0.20.0 2025-09-23T00:53:55,919 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 2025-09-23T00:53:55,924 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:55,925 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.arbitratorreconciliation-api/0.20.0 2025-09-23T00:53:55,925 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl/0.20.0 2025-09-23T00:53:55,927 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.keystore-none/9.0.0 2025-09-23T00:53:55,928 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.topology/0.20.0 2025-09-23T00:53:55,928 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.model.rfc5277/9.0.0 2025-09-23T00:53:55,928 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-repo-fs/14.0.14 2025-09-23T00:53:55,929 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.extension-onf/0.20.0 2025-09-23T00:53:55,930 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.encrypt-service-impl/0.21.0 2025-09-23T00:53:55,932 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-09-23T00:53:55,933 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-09-23T00:53:55,933 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.blueprint-config/0.20.0 2025-09-23T00:53:55,935 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding-util/14.0.13 2025-09-23T00:53:55,935 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-ssh/9.0.0 2025-09-23T00:53:55,936 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-transform/14.0.14 2025-09-23T00:53:55,936 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.of-switch-config-pusher/0.20.0 2025-09-23T00:53:55,937 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-09-23T00:53:55,939 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0. 
Missing service: [org.opendaylight.serviceutils.srm.spi.RegistryControl, org.opendaylight.mdsal.binding.api.DataBroker] 2025-09-23T00:53:55,939 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.topology-manager/0.20.0 2025-09-23T00:53:55,942 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.truststore-none/9.0.0 2025-09-23T00:53:55,943 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.inject.jersey-hk2/2.47.0 2025-09-23T00:53:55,943 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.tokenauthrealm/0.21.0 2025-09-23T00:53:55,945 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.containers.jersey-container-servlet/2.47.0 2025-09-23T00:53:55,946 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.jetty-auth-log-filter/0.21.0 2025-09-23T00:53:55,946 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.password-service-impl/0.21.0 2025-09-23T00:53:55,949 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.idm-store-h2/0.21.0 2025-09-23T00:53:55,950 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-dom-api/11.0.0 2025-09-23T00:53:55,951 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.graphite/4.2.32 2025-09-23T00:53:55,951 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.media.jersey-media-sse/2.47.0 2025-09-23T00:53:55,951 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.locator/2.6.1 2025-09-23T00:53:55,952 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.activation-api/1.2.2 2025-09-23T00:53:55,953 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.core.jersey-common/2.47.0 2025-09-23T00:53:55,954 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.utils/2.6.1 2025-09-23T00:53:55,954 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.api/2.6.1 2025-09-23T00:53:55,956 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.filterchain/0.21.0 2025-09-23T00:53:55,960 | INFO | features-3-thread-1 | CustomFilterAdapterConfigurationImpl | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Custom filter properties updated: {service.pid=org.opendaylight.aaa.filterchain, osgi.ds.satisfying.condition.target=(osgi.condition.id=true), customFilterList=, component.name=org.opendaylight.aaa.filterchain.configuration.impl.CustomFilterAdapterConfigurationImpl, felix.fileinstall.filename=file:/tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg, component.id=99, Filter.target=(org.opendaylight.aaa.filterchain.filter=true)} 
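The CustomFilterAdapterConfigurationImpl record above shows a component picking up etc/org.opendaylight.aaa.filterchain.cfg: felix.fileinstall turns the file into a Config Admin configuration for the PID org.opendaylight.aaa.filterchain, and the component is called back with the resulting property map (customFilterList is empty here). A generic Declarative Services sketch of that callback pattern, with a hypothetical class name and only standard OSGi DS annotations, could look like this:

    import java.util.Map;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Modified;

    // Generic sketch of a DS component bound to a Karaf etc/*.cfg file.
    // The PID mirrors the one in the log; the class itself is hypothetical.
    @Component(configurationPid = "org.opendaylight.aaa.filterchain")
    public class FilterChainConfigSketch {

        @Activate
        void activate(Map<String, Object> properties) {
            apply(properties);
        }

        @Modified
        void modified(Map<String, Object> properties) {
            // Called again whenever fileinstall rewrites the .cfg and Config Admin pushes an update.
            apply(properties);
        }

        private void apply(Map<String, Object> properties) {
            System.out.println("customFilterList=" + properties.get("customFilterList"));
        }
    }

Editing the .cfg file on a running instance would trigger the @Modified callback with the new customFilterList value.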
2025-09-23T00:53:55,960 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.shiro/0.21.0 2025-09-23T00:53:55,965 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-09-23T00:53:55,979 | INFO | features-3-thread-1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.aaa.shiro_0.21.0 [172]] 2025-09-23T00:53:55,981 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-09-23T00:53:55,981 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]}", size=1} 2025-09-23T00:53:55,981 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-09-23T00:53:55,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-akka-raft/11.0.0 2025-09-23T00:53:55,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-distributed-datastore/11.0.0 2025-09-23T00:53:55,996 | INFO | features-3-thread-1 | OSGiDatastoreContextIntrospectorFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore Context Introspector activated 2025-09-23T00:53:55,999 | INFO | features-3-thread-1 | FileModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Shard configuration provider started 2025-09-23T00:53:55,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-remoterpc-connector/11.0.0 2025-09-23T00:53:56,004 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-clustering-commons/11.0.0 2025-09-23T00:53:56,011 | INFO | features-3-thread-1 | FileAkkaConfigurationReader | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | File-based Pekko configuration reader enabled 2025-09-23T00:53:56,012 | INFO | features-3-thread-1 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider starting 2025-09-23T00:53:56,190 | INFO | features-3-thread-1 | ActorSystemProviderImpl | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating new ActorSystem 
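FileAkkaConfigurationReader and ActorSystemProviderImpl above build the clustered actor system from files shipped under the distribution's configuration/initial/ directory; the listen address (10.30.170.97:2550) and the peer addresses that show up in the connection-refused warnings below come from that configuration, not from code. A rough sketch of inspecting it with the Typesafe Config API follows; the file name and the nesting of the keys are assumptions and vary between releases:

    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;
    import java.io.File;

    public class ClusterConfigPeek {
        public static void main(String[] args) {
            // Illustrative path; the member's cluster settings live under configuration/initial/.
            Config config = ConfigFactory.parseFile(new File("configuration/initial/pekko.conf")).resolve();

            // Standard Pekko keys; adjust the prefix if the file nests them under a named section.
            System.out.println("listen on: "
                + config.getString("pekko.remote.artery.canonical.hostname") + ":"
                + config.getInt("pekko.remote.artery.canonical.port"));
            System.out.println("seed nodes: " + config.getStringList("pekko.cluster.seed-nodes"));
        }
    }

The refused connections to 10.30.171.150:2550 and 10.30.171.168:2550 are consistent with the other two configured members simply not listening yet at this point in the boot.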
2025-09-23T00:53:56,470 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Slf4jLogger | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Slf4jLogger started 2025-09-23T00:53:56,697 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | ArteryTransport | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Remoting started with transport [Artery tcp]; listening on address [pekko://opendaylight-cluster-data@10.30.170.97:2550] with UID [-1955037012415763614] 2025-09-23T00:53:56,709 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Starting up, Pekko version [1.0.3] ... 2025-09-23T00:53:56,762 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Registered cluster JMX MBean [pekko:type=Cluster] 2025-09-23T00:53:56,762 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Started up successfully 2025-09-23T00:53:56,812 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR started. Config: strategy [KeepMajority], stable-after [7 seconds], down-all-when-unstable [5250 milliseconds], selfUniqueAddress [pekko://opendaylight-cluster-data@10.30.170.97:2550#-1955037012415763614], selfDc [default]. 2025-09-23T00:53:57,047 | INFO | features-3-thread-1 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider started 2025-09-23T00:53:57,054 | INFO | features-3-thread-1 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION starting 2025-09-23T00:53:57,123 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.150:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.150/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T00:53:57,123 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.150:2550], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.150/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T00:53:57,129 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.168:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.168/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T00:53:57,130 | WARN | 
opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.168:2550], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.168/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T00:53:57,330 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : config 2025-09-23T00:53:57,331 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T00:53:57,332 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T00:53:57,337 | INFO | features-3-thread-1 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-config 2025-09-23T00:53:57,401 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-config 2025-09-23T00:53:57,414 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-31 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage MAPPED 2025-09-23T00:53:57,469 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Recovery complete 2025-09-23T00:53:57,475 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RecoveringClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: saving tombstone ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=0} 2025-09-23T00:53:57,536 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store config is using tell-based protocol 2025-09-23T00:53:57,539 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T00:53:57,540 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T00:53:57,540 | INFO | features-3-thread-1 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL starting 2025-09-23T00:53:57,544 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : operational 2025-09-23T00:53:57,544 | INFO | features-3-thread-1 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-operational 2025-09-23T00:53:57,558 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | 
ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-operational 2025-09-23T00:53:57,559 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-topology-config: Shard created, persistent : true 2025-09-23T00:53:57,559 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-config: Shard created, persistent : true 2025-09-23T00:53:57,559 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RecoveringClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: saving tombstone ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0} 2025-09-23T00:53:57,562 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Shard created, persistent : true 2025-09-23T00:53:57,562 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store operational is using tell-based protocol 2025-09-23T00:53:57,567 | INFO | features-3-thread-1 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service starting 2025-09-23T00:53:57,568 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-toaster-config: Shard created, persistent : true 2025-09-23T00:53:57,573 | INFO | features-3-thread-1 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service started 2025-09-23T00:53:57,581 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Recovery complete 2025-09-23T00:53:57,582 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.eos-dom-akka/11.0.0 2025-09-23T00:53:57,584 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-operational: Shard created, persistent : false 2025-09-23T00:53:57,585 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-topology-operational: Shard created, persistent : false 2025-09-23T00:53:57,585 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-toaster-operational: Shard created, persistent : false 2025-09-23T00:53:57,586 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-operational: Shard created, persistent : false 2025-09-23T00:53:57,594 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | 
RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-inventory-operational/member-1-shard-inventory-operational-notifier#1530597767 created and ready for shard:member-1-shard-inventory-operational 2025-09-23T00:53:57,595 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-topology-operational/member-1-shard-topology-operational-notifier#-156450367 created and ready for shard:member-1-shard-topology-operational 2025-09-23T00:53:57,595 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-toaster-config/member-1-shard-toaster-config-notifier#1970551781 created and ready for shard:member-1-shard-toaster-config 2025-09-23T00:53:57,596 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-toaster-operational/member-1-shard-toaster-operational-notifier#-1653879850 created and ready for shard:member-1-shard-toaster-operational 2025-09-23T00:53:57,596 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-topology-config/member-1-shard-topology-config-notifier#-2050602688 created and ready for shard:member-1-shard-topology-config 2025-09-23T00:53:57,596 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-inventory-config/member-1-shard-inventory-config-notifier#1630707065 created and ready for shard:member-1-shard-inventory-config 2025-09-23T00:53:57,597 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Starting recovery with journal batch size 1 2025-09-23T00:53:57,597 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Starting recovery with journal batch size 1 2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Starting recovery with journal batch size 1 2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Starting recovery with journal batch size 1 2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Starting recovery with journal batch size 1 
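The records that follow show each shard completing journal recovery with an empty log, seeding its term store with TermInfo{term=0} and reporting itself to the shard manager as a Follower; a few seconds later, having heard from no leader before its election timeout, each shard bumps its term and switches to Candidate ("Starting new election term 1"). A schematic sketch of that single Raft role transition (not controller code) is:

    // Schematic sketch of the role change the shards log below: a follower that hears
    // from no leader before its election timeout starts a new term as a candidate.
    public class RaftRoleSketch {
        enum Role { FOLLOWER, CANDIDATE, LEADER }

        private Role role = Role.FOLLOWER;
        private long currentTerm = 0;

        void onElectionTimeout() {
            if (role == Role.FOLLOWER) {
                currentTerm++;                 // "Starting new election term 1"
                role = Role.CANDIDATE;         // "Switching from behavior Follower to Candidate"
                System.out.println("term " + currentTerm + ", role " + role + ", requesting votes from peers");
            }
        }

        public static void main(String[] args) {
            RaftRoleSketch shard = new RaftRoleSketch();
            shard.onElectionTimeout();
        }
    }

Whether a candidate can then win the election depends on how many replicas of that shard are configured and reachable; with the other members still down, a three-replica shard cannot yet collect a majority of votes.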
2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Starting recovery with journal batch size 1 2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-default-config/member-1-shard-default-config-notifier#-1730090205 created and ready for shard:member-1-shard-default-config 2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Starting recovery with journal batch size 1 2025-09-23T00:53:57,598 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Starting recovery with journal batch size 1 2025-09-23T00:53:57,603 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-default-operational/member-1-shard-default-operational-notifier#-773138218 created and ready for shard:member-1-shard-default-operational 2025-09-23T00:53:57,624 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-47 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage DISK 2025-09-23T00:53:57,676 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: journal open: applyTo=0 2025-09-23T00:53:57,677 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: journal open: applyTo=0 2025-09-23T00:53:57,677 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: journal open: applyTo=0 2025-09-23T00:53:57,677 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: journal open: applyTo=0 2025-09-23T00:53:57,677 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: journal open: applyTo=0 2025-09-23T00:53:57,677 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: journal open: applyTo=0 2025-09-23T00:53:57,678 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: journal open: applyTo=0 2025-09-23T00:53:57,679 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-1-shard-inventory-config: journal open: applyTo=0 2025-09-23T00:53:57,708 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,708 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,709 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,709 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,711 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,712 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,713 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,716 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,718 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,718 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,718 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,718 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 
| member-1-shard-topology-config: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,718 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,719 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,720 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.eos-binding-adapter/14.0.13 2025-09-23T00:53:57,723 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , received role change from null to Follower 2025-09-23T00:53:57,723 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-operational , received role change from null to Follower 2025-09-23T00:53:57,723 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-config , received role change from null to Follower 2025-09-23T00:53:57,723 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-operational , received role change from null to Follower 2025-09-23T00:53:57,725 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , received role change from null to Follower 2025-09-23T00:53:57,727 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-config , received role change from null to Follower 2025-09-23T00:53:57,728 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , received role change from null to Follower 2025-09-23T00:53:57,728 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T00:53:57,728 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T00:53:57,728 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | 
shard-manager-operational: Received role changed for member-1-shard-toaster-operational from null to Follower 2025-09-23T00:53:57,728 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T00:53:57,728 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-toaster-config from null to Follower 2025-09-23T00:53:57,729 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-inventory-config from null to Follower 2025-09-23T00:53:57,729 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-topology-operational from null to Follower 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-topology-config from null to Follower 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-default-operational from null to Follower 2025-09-23T00:53:57,730 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-default-config from null to Follower 2025-09-23T00:53:57,734 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:57,735 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.impl/0.20.0 2025-09-23T00:53:57,736 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-09-23T00:53:57,736 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , received role change from null to Follower 2025-09-23T00:53:57,737 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T00:53:57,737 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-inventory-operational from null to Follower 2025-09-23T00:53:57,768 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationServiceFactory), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:57,773 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.impl/0.20.0. 
Missing service: [org.opendaylight.openflowplugin.api.openflow.statistics.ofpspecific.MessageIntelligenceAgency] 2025-09-23T00:53:57,780 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:57,786 | INFO | features-3-thread-1 | MessageIntelligenceAgencyImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Registered MBean org.opendaylight.openflowplugin.impl.statistics.ofpspecific:type=MessageIntelligenceAgencyMXBean 2025-09-23T00:53:57,786 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.impl/0.20.0 2025-09-23T00:53:57,787 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-09-23T00:53:57,789 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0. Missing service: [org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager] 2025-09-23T00:53:57,792 | INFO | features-3-thread-1 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | ReconciliationManager started 2025-09-23T00:53:57,794 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-09-23T00:53:57,795 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 2025-09-23T00:53:57,800 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:57,824 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:57,824 | INFO | features-3-thread-1 | OpenflowServiceRecoveryHandlerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Registering openflowplugin service recovery handlers 2025-09-23T00:53:57,825 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.ws.rs-api/2.1.6 2025-09-23T00:53:57,827 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.core.jersey-server/2.47.0 2025-09-23T00:53:57,828 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.common/4.2.2.Final 2025-09-23T00:53:57,829 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.buffer/4.2.2.Final 2025-09-23T00:53:57,829 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.api/0.20.0 2025-09-23T00:53:57,830 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 2025-09-23T00:53:57,836 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:53:57,836 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.keystore-api/9.0.0 2025-09-23T00:53:57,837 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.blueprint-config/0.20.0 2025-09-23T00:53:57,838 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-cluster-admin-impl/11.0.0 2025-09-23T00:53:57,840 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.truststore-api/9.0.0 2025-09-23T00:53:57,843 | INFO | features-3-thread-1 | Activator | 99 - org.apache.karaf.deployer.features - 4.4.7 | Deployment finished. Registering FeatureDeploymentListener 2025-09-23T00:53:58,142 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Done. 
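Feature installation finishes here; what follows is cluster formation, and the first warning after it (CandidateRegistryInit: "Ask timed out ... after [5000 ms]") is the standard Pekko ask-pattern timeout: a request was sent with a 5 second timeout to the owner-supervisor cluster singleton, which had not started yet because the cluster had not formed, so no reply ever arrived. The controller uses the typed ask API; the classic-API sketch below reproduces the same failure mode with a hypothetical actor that swallows its messages:

    import java.time.Duration;
    import java.util.concurrent.CompletionStage;
    import org.apache.pekko.actor.AbstractActor;
    import org.apache.pekko.actor.ActorRef;
    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.actor.Props;
    import org.apache.pekko.pattern.Patterns;

    public class AskTimeoutSketch {
        // An actor that receives the message but deliberately never replies.
        static class SilentActor extends AbstractActor {
            @Override
            public Receive createReceive() {
                return receiveBuilder().matchAny(msg -> { /* swallow, send no reply */ }).build();
            }
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("ask-timeout-demo");
            ActorRef silent = system.actorOf(Props.create(SilentActor.class), "silent");

            // Same 5 second timeout as the call in the log; the message is just a placeholder string.
            CompletionStage<Object> reply = Patterns.ask(silent, "clear-candidates", Duration.ofSeconds(5));
            reply.whenComplete((ok, failure) -> {
                // failure is an AskTimeoutException because no reply arrives within the timeout.
                System.out.println("result: " + ok + ", failure: " + failure);
                system.terminate();
            });
        }
    }

As the warning itself notes ("Rescheduling."), the caller retries the request, so this timeout during early startup is transient rather than fatal.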
2025-09-23T00:54:00,733 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#719687795]], but this node is not initialized yet 2025-09-23T00:54:00,764 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon#153627682]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:00,765 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon#153627682]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:00,765 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon#153627682]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:00,765 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon#153627682]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:02,695 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | CandidateRegistryInit | 185 - org.opendaylight.controller.eos-dom-akka - 11.0.0 | member-1 : Initial removal of candidates from previous iteration failed. Rescheduling. java.util.concurrent.TimeoutException: Ask timed out on [Actor[pekko://opendaylight-cluster-data/system/singletonProxyOwnerSupervisor-no-dc#511382855]] after [5000 ms]. Message of type [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply. at org.apache.pekko.actor.typed.scaladsl.AskPattern$.$anonfun$onTimeout$1(AskPattern.scala:141) ~[bundleFile:?] at org.apache.pekko.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:737) ~[bundleFile:?] at org.apache.pekko.actor.Scheduler$$anon$7.run(Scheduler.scala:491) ~[bundleFile:?] at scala.concurrent.ExecutionContext$parasitic$.execute(ExecutionContext.scala:222) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:384) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.executeBucket$1(LightArrayRevolverScheduler.scala:332) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:336) ~[bundleFile:?] 
at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:288) ~[bundleFile:?] at java.lang.Thread.run(Thread.java:1583) ~[?:?] 2025-09-23T00:54:04,489 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#1864079357]], but this node is not initialized yet 2025-09-23T00:54:04,489 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon#1855382502]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:04,499 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] is JOINING itself (with roles [member-1, dc-default], version [0.0.0]) and forming new cluster 2025-09-23T00:54:04,499 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - is the new leader among reachable nodes (more leaders may exist) 2025-09-23T00:54:04,503 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Up] 2025-09-23T00:54:04,509 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.97:2550 2025-09-23T00:54:04,509 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.97:2550 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-default-config 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | 
updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-topology-config 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-09-23T00:54:04,510 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-09-23T00:54:04,515 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist). 2025-09-23T00:54:04,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton manager starting singleton actor [pekko://opendaylight-cluster-data/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-09-23T00:54:04,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | ClusterSingletonManager state change [Start -> Oldest] 2025-09-23T00:54:05,523 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | ClusterSingletonProxy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton identified at [pekko://opendaylight-cluster-data/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-09-23T00:54:05,529 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [1] dead letters encountered. 
If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T00:54:07,784 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Candidate): Starting new election term 1 2025-09-23T00:54:07,785 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,785 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational (Candidate): Starting new election term 1 2025-09-23T00:54:07,785 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , received role change from Follower to Candidate 2025-09-23T00:54:07,785 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,785 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-toaster-config from Follower to Candidate 2025-09-23T00:54:07,786 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-operational , received role change from Follower to Candidate 2025-09-23T00:54:07,786 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-toaster-operational from Follower to Candidate 2025-09-23T00:54:07,799 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Candidate): Starting new election term 1 2025-09-23T00:54:07,800 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,800 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , received role change from Follower to Candidate 2025-09-23T00:54:07,801 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-topology-operational from Follower to Candidate 2025-09-23T00:54:07,802 | INFO | 
opendaylight-cluster-data-shard-dispatcher-39 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config (Candidate): Starting new election term 1 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate): Starting new election term 1 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config (Candidate): Starting new election term 1 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , received role change from Follower to Candidate 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-config , received role change from Follower to Candidate 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-inventory-operational from Follower to Candidate 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-config , received role change from Follower to Candidate 2025-09-23T00:54:07,802 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-topology-config from Follower to Candidate 2025-09-23T00:54:07,803 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-default-config from Follower to Candidate 2025-09-23T00:54:07,807 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational (Candidate): Starting new election term 1 2025-09-23T00:54:07,808 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-1-shard-default-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,808 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-operational , received role change from Follower to Candidate 2025-09-23T00:54:07,808 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-default-operational from Follower to Candidate 2025-09-23T00:54:07,817 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Candidate): Starting new election term 1 2025-09-23T00:54:07,817 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-09-23T00:54:07,818 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , received role change from Follower to Candidate 2025-09-23T00:54:07,818 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-inventory-config from Follower to Candidate 2025-09-23T00:54:13,073 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#719687795]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:13,074 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#719687795]] (version [1.0.3]) 2025-09-23T00:54:13,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.171.150:2550] is JOINING, roles [member-3, dc-default], version [0.0.0] 2025-09-23T00:54:13,143 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#-1834561701] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
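Editor's note: the dead-letter and "was unhandled" entries above quote the two Pekko settings that govern this logging, 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. A minimal sketch of how they could be tuned, assuming the HOCON actor-system configuration file this deployment loads (the file location and the values shown are illustrative, not taken from this log):

    # illustrative only; the keys are the ones quoted in the log entries above
    pekko {
      log-dead-letters = 10                    # stop logging after 10 dead letters
      log-dead-letters-during-shutdown = off   # assumed: silence them during shutdown
    }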
2025-09-23T00:54:13,144 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#658578290] was unhandled. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T00:54:14,086 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.171.150:2550] to [Up] 2025-09-23T00:54:14,086 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.171.150:2550 2025-09-23T00:54:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config 2025-09-23T00:54:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-topology-config 2025-09-23T00:54:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-09-23T00:54:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-3-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config 2025-09-23T00:54:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-09-23T00:54:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-3-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-topology-config 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-3-shard-inventory-config set to 
pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-3-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.171.150:2550 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-3-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-3-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-3-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-09-23T00:54:14,088 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer 
address for peer member-3-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-09-23T00:54:16,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#1864079357]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T00:54:16,520 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#1864079357]] (version [1.0.3]) 2025-09-23T00:54:16,570 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.171.168:2550] is JOINING, roles [member-2, dc-default], version [0.0.0] 2025-09-23T00:54:16,570 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#-1834561701] was unhandled. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T00:54:16,571 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#658578290] was unhandled. [5] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
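Editor's note: the join sequence recorded here (member-1 at 10.30.170.97 self-joining and becoming leader, member-3 at 10.30.171.150 and member-2 at 10.30.171.168 joining via InitJoin/InitJoinAck on port 2550, all carrying the dc-default role) is driven by each node's seed-node configuration. A hedged sketch of what member-1's clustering section might look like; the file name and key layout follow standard Pekko cluster settings and are assumptions, while the actor-system name, addresses, port and roles are taken from the log:

    # e.g. configuration/initial/pekko.conf on member-1 (assumed location and layout)
    pekko {
      remote.artery.canonical {
        hostname = "10.30.170.97"   # this member's address, from the log
        port = 2550                 # clustering port seen in the log
      }
      cluster {
        roles = ["member-1", "dc-default"]
        seed-nodes = [
          "pekko://opendaylight-cluster-data@10.30.170.97:2550",
          "pekko://opendaylight-cluster-data@10.30.171.168:2550",
          "pekko://opendaylight-cluster-data@10.30.171.150:2550"
        ]
      }
    }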
2025-09-23T00:54:17,144 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.171.168:2550] to [Up] 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 
11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T00:54:17,145 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T00:54:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T00:54:17,147 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T00:54:17,829 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Candidate): Starting new election term 2 2025-09-23T00:54:17,829 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-1-shard-topology-config (Candidate): Starting new election term 2 2025-09-23T00:54:17,847 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config (Candidate): Starting new election term 2 2025-09-23T00:54:17,847 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Candidate): Starting new election term 2 2025-09-23T00:54:17,867 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,868 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,868 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,869 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@c6a0886 2025-09-23T00:54:17,869 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,869 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@449e989a 2025-09-23T00:54:17,869 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , received role change from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-config , received role change from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@3bd5281a 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-topology-operational from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore 
- 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@d9b9ad9 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-default-config from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , received role change from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-config , received role change from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-toaster-config from Candidate to Leader 2025-09-23T00:54:17,870 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-topology-config from Candidate to Leader 2025-09-23T00:54:17,886 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate): Starting new election term 2 2025-09-23T00:54:17,890 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational (Candidate): Starting new election term 2 2025-09-23T00:54:17,897 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Candidate): Starting new election term 2 2025-09-23T00:54:17,900 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,900 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , received role change from Candidate to Leader 2025-09-23T00:54:17,900 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@de33de0 2025-09-23T00:54:17,900 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-inventory-operational from Candidate to Leader 2025-09-23T00:54:17,904 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-1-shard-default-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,904 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-operational , received role change from Candidate to Leader 2025-09-23T00:54:17,904 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@9d9bdb1 2025-09-23T00:54:17,904 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-default-operational from Candidate to Leader 2025-09-23T00:54:17,905 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational (Candidate): Starting new election term 2 2025-09-23T00:54:17,909 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,910 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , received role change from Candidate to Leader 2025-09-23T00:54:17,910 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@2c313b6a 2025-09-23T00:54:17,910 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-inventory-config from Candidate to Leader 2025-09-23T00:54:17,910 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-09-23T00:54:17,915 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type CONFIGURATION activated 2025-09-23T00:54:17,915 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION started 2025-09-23T00:54:17,917 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 2 2025-09-23T00:54:17,917 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier 
for member-1-shard-toaster-operational , received role change from Candidate to Leader 2025-09-23T00:54:17,917 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@162ba39 2025-09-23T00:54:17,917 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-toaster-operational from Candidate to Leader 2025-09-23T00:54:17,918 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-09-23T00:54:17,935 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ConcurrentDOMDataBroker | 358 - org.opendaylight.yangtools.util - 14.0.14 | ThreadFactory created: CommitFutures 2025-09-23T00:54:17,938 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | DataBrokerCommitExecutor | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker commit exector started 2025-09-23T00:54:17,939 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type OPERATIONAL activated 2025-09-23T00:54:17,941 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ConcurrentDOMDataBroker | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker started 2025-09-23T00:54:17,948 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for DataBroker activated 2025-09-23T00:54:18,000 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [Initial app config AaaCertServiceConfig, (objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-09-23T00:54:18,004 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-1-shard-default-config#-1992517045], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present} 2025-09-23T00:54:18,006 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=0}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-1-shard-default-config#-1992517045], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} 
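Editor's note: with both datastores reporting "All Shards are ready" and the CONFIGURATION and OPERATIONAL datastore services activated, shard and shard-manager state can be inspected over Jolokia/JMX. A sketch of such a check against this node; the Jolokia endpoint, port 8181 and admin:admin credentials are assumed defaults, while the datastore and shard names come from the log above:

    # shard-manager status of the config datastore (assumed endpoint and credentials)
    curl -u admin:admin \
      "http://10.30.170.97:8181/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config"

    # raft/leader state of one shard named in the log
    curl -u admin:admin \
      "http://10.30.170.97:8181/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=Shards,name=member-1-shard-default-config"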
2025-09-23T00:54:18,017 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=0}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-1-shard-default-config#-1992517045], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} in 11.83 ms 2025-09-23T00:54:18,027 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | EOSClusterSingletonServiceProvider | 257 - org.opendaylight.mdsal.mdsal-singleton-impl - 14.0.13 | Cluster Singleton Service started 2025-09-23T00:54:18,033 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer registered 2025-09-23T00:54:18,118 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-09-23T00:54:18,179 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ArbitratorReconciliationManagerImpl | 296 - org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl - 0.20.0 | ArbitratorReconciliationManager has started successfully. 2025-09-23T00:54:18,194 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-default-operational#-365535535], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present} 2025-09-23T00:54:18,194 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-default-operational#-365535535], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} 2025-09-23T00:54:18,201 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=0, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-default-operational#-365535535], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} in 6.858 ms 2025-09-23T00:54:18,214 | INFO | Blueprint Extender: 3 | LLDPSpeaker | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | LLDPSpeaker started, it will send LLDP frames each 5 seconds 2025-09-23T00:54:18,306 | WARN | opendaylight-cluster-data-shard-dispatcher-38 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-config: Store Tx member-3-datastore-config-fe-0-txn-1-0: Conflicting modification for path /(config:aaa:authn:encrypt:service:config?revision=2024-02-02)aaa-encrypt-service-config. 2025-09-23T00:54:18,312 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | MD-SAL configuration-based SwitchConnectionProviders started 2025-09-23T00:54:18,323 | INFO | CommitFutures-1 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Configuration update succeeded 2025-09-23T00:54:18,325 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | DeviceOwnershipService started 2025-09-23T00:54:18,346 | WARN | opendaylight-cluster-data-shard-dispatcher-41 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-operational: Store Tx member-2-datastore-operational-fe-0-txn-0-0: Conflicting modification for path /(urn:opendaylight:serviceutils:upgrade?revision=2018-07-02)upgrade-config. 2025-09-23T00:54:18,362 | INFO | Blueprint Extender: 3 | NodeConnectorInventoryEventTranslator | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | NodeConnectorInventoryEventTranslator has started. 2025-09-23T00:54:18,364 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 has been started 2025-09-23T00:54:18,364 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.lldp-speaker_0.20.0 [300] was successfully created 2025-09-23T00:54:18,369 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | DefaultConfigPusher | 301 - org.opendaylight.openflowplugin.applications.of-switch-config-pusher - 0.20.0 | DefaultConfigPusher has started. 2025-09-23T00:54:18,370 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-09-23T00:54:18,432 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | FlowCapableTopologyProvider | 304 - org.opendaylight.openflowplugin.applications.topology-manager - 0.20.0 | Topology Manager service started. 
2025-09-23T00:54:18,451 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-default-impl' 2025-09-23T00:54:18,500 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiPasswordServiceConfigBootstrap | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | Listening for password service configuration 2025-09-23T00:54:18,502 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration, (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-09-23T00:54:18,505 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration, (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-09-23T00:54:18,508 | ERROR | opendaylight-cluster-data-notification-dispatcher-50 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | bundle org.opendaylight.aaa.idm-store-h2:0.21.0 (167)[org.opendaylight.aaa.datastore.h2.H2Store(98)] : Constructor argument 0 in class class org.opendaylight.aaa.datastore.h2.H2Store has unsupported type org.opendaylight.aaa.datastore.h2.ConnectionProvider 2025-09-23T00:54:18,516 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default iteration count=20000 2025-09-23T00:54:18,516 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default algorithm=SHA-512 2025-09-23T00:54:18,516 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will not utilize a private salt, since none was configured 2025-09-23T00:54:18,549 | INFO | Blueprint Extender: 2 | LazyBindingList | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for lists larger than 16 element(s) 2025-09-23T00:54:18,551 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | H2 IDMStore activated 2025-09-23T00:54:18,558 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration, 
(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-09-23T00:54:18,559 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration, (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:54:18,561 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T00:54:18,589 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-09-23T00:54:18,613 | INFO | Blueprint Extender: 2 | AaaCertMdsalProvider | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCertMdsalProvider Initialized 2025-09-23T00:54:18,666 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-legacy-impl' 2025-09-23T00:54:18,670 | INFO | Blueprint Extender: 2 | MdsalUtils | 163 - org.opendaylight.aaa.cert - 0.21.0 | initDatastore: data populated: CONFIGURATION, DataObjectIdentifier[ @ urn.opendaylight.yang.aaa.cert.mdsal.rev160321.KeyStores ], KeyStores{id=KeyStores:1} 2025-09-23T00:54:18,689 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-09-23T00:54:18,690 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-09-23T00:54:18,690 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), Initial app config TopologyLldpDiscoveryConfig] 2025-09-23T00:54:18,720 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading properties from '(urn:opendaylight:params:xml:ns:yang:openflow:provider:config?revision=2016-05-10)openflow-provider-config' YANG file 2025-09-23T00:54:18,725 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | rpc-requests-quota configuration property was changed to '20000' 2025-09-23T00:54:18,725 | INFO | Blueprint Extender: 1 | 
ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | global-notification-quota configuration property was changed to '64000' 2025-09-23T00:54:18,726 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | switch-features-mandatory configuration property was changed to 'false' 2025-09-23T00:54:18,726 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | enable-flow-removed-notification configuration property was changed to 'true' 2025-09-23T00:54:18,726 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-rpc-enabled configuration property was changed to 'false' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-count-limit configuration property was changed to '25600' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-interval-timeout-limit configuration property was changed to '500' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | echo-reply-timeout configuration property was changed to '2000' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-table-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-flow-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-group-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-meter-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-port-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-queue-statistics-polling-on configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | skip-table-features configuration property was changed to 'true' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | basic-timer-delay configuration property was changed to '3000' 2025-09-23T00:54:18,727 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 
0.20.0 | maximum-timer-delay configuration property was changed to '900000' 2025-09-23T00:54:18,733 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | use-single-layer-serialization configuration property was changed to 'true' 2025-09-23T00:54:18,733 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-min-threads configuration property was changed to '1' 2025-09-23T00:54:18,735 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-max-threads configuration property was changed to '32000' 2025-09-23T00:54:18,735 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-timeout configuration property was changed to '60' 2025-09-23T00:54:18,735 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-rate-limit-per-min configuration property was changed to '0' 2025-09-23T00:54:18,735 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-hold-time-in-seconds configuration property was changed to '0' 2025-09-23T00:54:18,736 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-datastore-removal-delay configuration property was changed to '500' 2025-09-23T00:54:18,736 | INFO | Blueprint Extender: 1 | OSGiConfigurationServiceFactory | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading configuration from 'org.opendaylight.openflowplugin' configuration file 2025-09-23T00:54:18,736 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | YangLibraryWriter | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer started with modules-state enabled 2025-09-23T00:54:18,741 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-09-23T00:54:18,758 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | felix.fileinstall.filename configuration property was changed to 'file:/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg' 2025-09-23T00:54:18,758 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | service.pid configuration property was changed to 'org.opendaylight.openflowplugin' 2025-09-23T00:54:18,803 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | AAAEncryptionServiceImpl | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | AAAEncryptionService activated 2025-09-23T00:54:18,804 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Encryption Service enabled 2025-09-23T00:54:18,819 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking 
presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] 2025-09-23T00:54:18,829 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] already present 2025-09-23T00:54:18,834 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] 2025-09-23T00:54:18,834 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] already present 2025-09-23T00:54:18,879 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiClusterAdmin | 193 - org.opendaylight.controller.sal-cluster-admin-impl - 11.0.0 | Cluster Admin services started 2025-09-23T00:54:18,887 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL started 2025-09-23T00:54:18,981 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-inventory-operational#742131242], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=present} 2025-09-23T00:54:18,982 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-inventory-operational#742131242], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=present}} 
2025-09-23T00:54:18,983 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-inventory-operational#742131242], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=present}} in 614.1 μs 2025-09-23T00:54:19,007 | INFO | Blueprint Extender: 1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | OpenFlowPluginProvider started, waiting for onSystemBootReady() 2025-09-23T00:54:19,008 | INFO | Blueprint Extender: 1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@1802e3c6 2025-09-23T00:54:19,008 | INFO | Blueprint Extender: 1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@6e650ed5 2025-09-23T00:54:19,016 | INFO | Blueprint Extender: 1 | OnfExtensionProvider | 308 - org.opendaylight.openflowplugin.extension-onf - 0.20.0 | ONF Extension Provider started. 2025-09-23T00:54:19,018 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 has been started 2025-09-23T00:54:19,018 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.impl_0.20.0 [309] was successfully created 2025-09-23T00:54:19,060 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.topology.lldp.LLDPLinkAger@7ff20ba5 was registered as configuration listener to OpenFlowPlugin configuration service 2025-09-23T00:54:19,076 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-09-23T00:54:19,077 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T00:54:19,080 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.frm.impl.ForwardingRulesManagerImpl@77310e8 was registered as configuration listener to OpenFlowPlugin configuration service 2025-09-23T00:54:19,082 | INFO | Blueprint Extender: 2 | ODLKeyTool | 163 - org.opendaylight.aaa.cert - 0.21.0 | ctl.jks is created 2025-09-23T00:54:19,127 | INFO | Blueprint Extender: 1 | LLDPActivator | 303 - 
org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | Starting LLDPActivator with lldpSecureKey: aa9251f8-c7c0-4322-b8d6-c3a84593bda3 2025-09-23T00:54:19,128 | INFO | Blueprint Extender: 1 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | LLDPDiscoveryListener started. 2025-09-23T00:54:19,131 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 has been started 2025-09-23T00:54:19,132 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery_0.20.0 [303] was successfully created 2025-09-23T00:54:19,161 | INFO | Blueprint Extender: 2 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | Certificate Manager service has been initialized 2025-09-23T00:54:19,167 | INFO | Blueprint Extender: 2 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCert Rpc Service has been initialized 2025-09-23T00:54:19,170 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 has been started 2025-09-23T00:54:19,179 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.cert_0.21.0 [163] was successfully created 2025-09-23T00:54:19,195 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Checking if default entries must be created in IDM store 2025-09-23T00:54:19,230 | INFO | Blueprint Extender: 3 | ForwardingRulesManagerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | ForwardingRulesManager has started successfully. 
2025-09-23T00:54:19,237 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 has been started
2025-09-23T00:54:19,237 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager_0.20.0 [299] was successfully created
2025-09-23T00:54:19,392 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_DOMAINS does not exist, creating it
2025-09-23T00:54:19,510 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Created default domain
2025-09-23T00:54:19,516 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_ROLES does not exist, creating it
2025-09-23T00:54:19,618 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Created 'admin' role
2025-09-23T00:54:19,660 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Created 'user' role
2025-09-23T00:54:19,780 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_USERS does not exist, creating it
2025-09-23T00:54:19,795 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-operational: Store Tx member-3-datastore-operational-fe-0-txn-3-0: Conflicting modification for path /(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)streams/stream/stream[{(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)name=NETCONF}].
2025-09-23T00:54:19,816 | WARN | opendaylight-cluster-data-shard-dispatcher-38 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-operational: Store Tx member-2-datastore-operational-fe-0-txn-3-0: Conflicting modification for path /(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)streams/stream/stream[{(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)name=NETCONF}].
2025-09-23T00:54:19,825 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_GRANTS does not exist, creating it
2025-09-23T00:54:19,916 | INFO | Blueprint Extender: 1 | AAAShiroProvider | 172 - org.opendaylight.aaa.shiro - 0.21.0 | AAAShiroProvider Session Initiated
2025-09-23T00:54:20,027 | INFO | Blueprint Extender: 1 | IniSecurityManagerFactory | 171 - org.opendaylight.aaa.repackaged-shiro - 0.21.0 | Realms have been explicitly set on the SecurityManager instance - auto-setting of realms will not occur.
2025-09-23T00:54:20,051 | ERROR | Blueprint Extender: 1 | MdsalRestconfServer | 279 - org.opendaylight.netconf.restconf-server-mdsal - 9.0.0 | bundle org.opendaylight.netconf.restconf-server-mdsal:9.0.0 (279)[org.opendaylight.restconf.server.mdsal.MdsalRestconfServer(76)] : Constructor argument 5 in class class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer has unsupported type [Lorg.opendaylight.restconf.server.spi.RpcImplementation; 2025-09-23T00:54:20,079 | WARN | opendaylight-cluster-data-shard-dispatcher-39 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-operational: Store Tx member-1-datastore-operational-fe-0-txn-3-0: Conflicting modification for path /(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)streams/stream/stream[{(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)name=NETCONF}]. 2025-09-23T00:54:20,144 | INFO | Blueprint Extender: 1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278]] 2025-09-23T00:54:20,145 | INFO | paxweb-config-1-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-11,contextPath='/rests'} 2025-09-23T00:54:20,145 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}", size=2} 2025-09-23T00:54:20,145 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-11,contextPath='/rests'} 2025-09-23T00:54:20,146 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@6afaa571{/rests,null,STOPPED} 2025-09-23T00:54:20,147 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@6afaa571{/rests,null,STOPPED} 2025-09-23T00:54:20,152 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-09-23T00:54:20,153 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]}", size=2} 2025-09-23T00:54:20,153 | INFO | 
paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T00:54:20,154 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/rests" with default Osgi Context OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} 2025-09-23T00:54:20,157 | INFO | paxweb-config-1-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-09-23T00:54:20,157 | INFO | paxweb-config-1-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-09-23T00:54:20,157 | INFO | paxweb-config-1-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@6afaa571{/rests,null,AVAILABLE} 2025-09-23T00:54:20,157 | INFO | paxweb-config-1-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}} as OSGi service for "/rests" context path 2025-09-23T00:54:20,158 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T00:54:20,159 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-14,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-09-23T00:54:20,159 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-14,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]}", size=2} 2025-09-23T00:54:20,159 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T00:54:20,159 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T00:54:20,160 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-09-23T00:54:20,160 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]}", size=1} 2025-09-23T00:54:20,160 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 
- org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-09-23T00:54:20,160 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /rests with 4 service(s) 2025-09-23T00:54:20,161 | INFO | paxweb-config-1-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-18,contextPath='/.well-known'} 2025-09-23T00:54:20,161 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=321, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}", size=2} 2025-09-23T00:54:20,161 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-18,contextPath='/.well-known'} 2025-09-23T00:54:20,162 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=321, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@37a4998b{/.well-known,null,STOPPED} 2025-09-23T00:54:20,163 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@37a4998b{/.well-known,null,STOPPED} 2025-09-23T00:54:20,163 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]} 2025-09-23T00:54:20,163 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]}", size=2} 2025-09-23T00:54:20,163 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T00:54:20,164 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-09-23T00:54:20,164 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/.well-known" with default Osgi Context 
OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=321, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} 2025-09-23T00:54:20,164 | INFO | paxweb-config-1-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@37a4998b{/.well-known,null,AVAILABLE} 2025-09-23T00:54:20,164 | INFO | paxweb-config-1-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=321, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}} as OSGi service for "/.well-known" context path 2025-09-23T00:54:20,165 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T00:54:20,165 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /.well-known with 3 service(s) 2025-09-23T00:54:20,166 | INFO | Blueprint Extender: 1 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | Binding URL provider org.opendaylight.restconf.server.jaxrs.JaxRsYangLibrary@1d0a0a79 2025-09-23T00:54:20,166 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-21,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]} 2025-09-23T00:54:20,167 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-21,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]}", size=1} 2025-09-23T00:54:20,167 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-21,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]} 2025-09-23T00:54:20,221 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4AddressNoZone 2025-09-23T00:54:20,221 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4Prefix 2025-09-23T00:54:20,222 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6AddressNoZone 2025-09-23T00:54:20,222 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 
14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6Prefix 2025-09-23T00:54:20,247 | INFO | Blueprint Extender: 1 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with service class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer 2025-09-23T00:54:20,247 | INFO | Blueprint Extender: 1 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with base path: /restconf, default encoding: JSON, default pretty print: false 2025-09-23T00:54:20,289 | INFO | Blueprint Extender: 1 | OSGiNorthbound | 275 - org.opendaylight.netconf.restconf-nb - 9.0.0 | Global RESTCONF northbound pools started 2025-09-23T00:54:20,295 | INFO | paxweb-config-1-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-24,contextPath='/auth'} 2025-09-23T00:54:20,296 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=326, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}", size=2} 2025-09-23T00:54:20,296 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-24,contextPath='/auth'} 2025-09-23T00:54:20,297 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=326, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@5ff1918c{/auth,null,STOPPED} 2025-09-23T00:54:20,298 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@5ff1918c{/auth,null,STOPPED} 2025-09-23T00:54:20,298 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-25,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-09-23T00:54:20,299 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-25,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]}", size=2} 2025-09-23T00:54:20,299 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T00:54:20,299 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/auth" with default Osgi Context 
OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=326, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} 2025-09-23T00:54:20,299 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.aaa.shiro_0.21.0 [172] registered context path /auth with 4 service(s) 2025-09-23T00:54:20,300 | INFO | paxweb-config-1-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-09-23T00:54:20,300 | INFO | paxweb-config-1-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-09-23T00:54:20,300 | INFO | paxweb-config-1-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@5ff1918c{/auth,null,AVAILABLE} 2025-09-23T00:54:20,300 | INFO | paxweb-config-1-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=326, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}} as OSGi service for "/auth" context path 2025-09-23T00:54:20,300 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 has been started 2025-09-23T00:54:20,301 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.shiro_0.21.0 [172] was successfully created 2025-09-23T00:54:20,301 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T00:54:20,301 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-09-23T00:54:20,301 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T00:54:20,301 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]}", size=2} 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter 
configuration for context /rests 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-27,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-27,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]}", size=1} 2025-09-23T00:54:20,302 | INFO | paxweb-config-1-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-27,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-09-23T00:54:21,140 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos: Elapsed time 25s, remaining time 274s, diag: Active {INSTALLED=0, RESOLVED=10, UNKNOWN=0, GRACE_PERIOD=0, WAITING=0, STARTING=0, ACTIVE=399, STOPPING=0, FAILURE=0} 2025-09-23T00:54:21,140 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] 2025-09-23T00:54:21,140 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | Now notifying all its registered SystemReadyListeners... 
2025-09-23T00:54:21,140 | INFO | SystemReadyService-0 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | onSystemBootReady() received, starting the switch connections
2025-09-23T00:54:21,257 | INFO | epollEventLoopGroup-2-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6653
2025-09-23T00:54:21,258 | INFO | epollEventLoopGroup-2-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6653
2025-09-23T00:54:21,258 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@1802e3c6 started
2025-09-23T00:54:21,261 | INFO | epollEventLoopGroup-4-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6633
2025-09-23T00:54:21,261 | INFO | epollEventLoopGroup-4-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6633
2025-09-23T00:54:21,261 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@6e650ed5 started
2025-09-23T00:54:21,261 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | All switchConnectionProviders are up and running (2).
2025-09-23T00:57:07,525 | INFO | sshd-SshServer[7a5ec7bb](port=8101)-nio2-thread-1 | OpenSSHKeyPairProvider | 121 - org.apache.karaf.shell.ssh - 4.4.7 | Creating ssh server private key at /tmp/karaf-0.23.0/etc/host.key
2025-09-23T00:57:07,529 | INFO | sshd-SshServer[7a5ec7bb](port=8101)-nio2-thread-1 | OpenSSHKeyPairGenerator | 121 - org.apache.karaf.shell.ssh - 4.4.7 | generateKeyPair(RSA) generating host key - size=2048
2025-09-23T00:57:08,115 | INFO | sshd-SshServer[7a5ec7bb](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.170.213:45004 authenticated
2025-09-23T00:57:10,552 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/010__Cluster_Reconcilliation_Multi_DPN.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/010__Cluster_Reconcilliation_Multi_DPN.robot
2025-09-23T00:57:11,355 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Check Shards Status And Initialize Variables" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Check Shards Status And Initialize Variables
2025-09-23T00:57:12,164 | INFO | qtp964829283-509 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040)
2025-09-23T00:57:12,167 | INFO | qtp964829283-509 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040)
2025-09-23T00:57:12,601 | INFO | qtp964829283-509 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication is now enabled
2025-09-23T00:57:12,601 | INFO | qtp964829283-509 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication Manager activated
2025-09-23T00:57:12,643 | INFO | qtp964829283-509 | ApiPathParser | 273 - org.opendaylight.netconf.restconf-api - 9.0.0 | Consecutive slashes in REST URLs will be rejected
2025-09-23T00:57:19,582 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Inventory Follower and Leader Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Inventory Follower and Leader Before Cluster Restart
2025-09-23T00:57:20,882 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node1
2025-09-23T00:57:24,339 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Add Bulk Flow From Follower
2025-09-23T00:57:24,904 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true]
2025-09-23T00:57:25,182 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true]
2025-09-23T00:57:25,703 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1
2025-09-23T00:57:25,727 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 2 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#1853796265], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=2, shard=topology, dataTree=present}
2025-09-23T00:57:25,727 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=2} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=2, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#1853796265], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=2, shard=topology, dataTree=present}}
2025-09-23T00:57:25,728 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=2} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=0}, cookie=2, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#1853796265], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=2, shard=topology, dataTree=present}} in 991.7 μs
2025-09-23T00:57:26,303 | WARN | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming
java.lang.UnsupportedOperationException: null
    at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0]
    at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0]
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?]
    at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?]
    at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?]
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?]
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?]
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?]
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0]
    at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?]
2025-09-23T00:57:26,325 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | null
2025-09-23T00:57:26,326 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming
java.util.concurrent.CancellationException: Previous action failed
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0]
    at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?]
Caused by: java.lang.UnsupportedOperationException
    at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0]
    at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0]
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?]
    at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?]
    at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?]
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?]
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?]
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?]
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0]
    ... 2 more
2025-09-23T00:57:26,327 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed
2025-09-23T00:57:40,812 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | Leader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Leader): At least 1 followers need to be active, Switching member-1-shard-inventory-config from Leader to IsolatedLeader
2025-09-23T00:57:40,814 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Leader) :- Switching from behavior Leader to IsolatedLeader, election term: 2
2025-09-23T00:57:40,814 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , received role change from Leader to IsolatedLeader
2025-09-23T00:57:40,814 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-inventory-config from Leader to IsolatedLeader
2025-09-23T00:57:43,593 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Current transaction member-2-datastore-config-fe-0-chn-5-txn-0-1 has timed out after 17266 ms in state COMMIT_PENDING
2025-09-23T00:57:43,594 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Transaction member-2-datastore-config-fe-0-chn-5-txn-0-1 is still committing, cannot abort
2025-09-23T00:57:58,652 | WARN | opendaylight-cluster-data-shard-dispatcher-40 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Current transaction member-2-datastore-config-fe-0-chn-5-txn-0-1 has timed out after 15059 ms in state COMMIT_PENDING
2025-09-23T00:57:58,652 | WARN | opendaylight-cluster-data-shard-dispatcher-40 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Transaction member-2-datastore-config-fe-0-chn-5-txn-0-1 is still committing, cannot abort
2025-09-23T00:58:13,712 | WARN | opendaylight-cluster-data-shard-dispatcher-37 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Current transaction member-2-datastore-config-fe-0-chn-5-txn-0-1 has timed out after 15060 ms in state COMMIT_PENDING
2025-09-23T00:58:13,713 | WARN | opendaylight-cluster-data-shard-dispatcher-37 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Transaction member-2-datastore-config-fe-0-chn-5-txn-0-1 is still committing, cannot abort
2025-09-23T00:58:24,878 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:25,904 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:26,924 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:27,944 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:28,964 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:29,983 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:31,003 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:32,023 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:33,043 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:34,063 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:35,083 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:36,104 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:37,123 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:38,144 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:58:39,163 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:40,184 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:41,204 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:42,223 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:43,244 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:58:44,264 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:45,284 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:46,304 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:47,324 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$D], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:48,343 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$E], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:58:49,364 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$F], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:50,383 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$G], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:51,404 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$H], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:52,424 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$I], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:53,444 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$J], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:58:54,463 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$K], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:55,483 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$L], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:56,504 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$M], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:57,524 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$N], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:58:58,544 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$O], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:58:59,565 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$P], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:00,584 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:01,604 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$R], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:02,624 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$S], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:03,645 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$T], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:04,664 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$U], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:05,685 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$V], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:05,736 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Bulk Flows and Verify In Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Bulk Flows and Verify In Inventory Leader 2025-09-23T00:59:06,081 | INFO | qtp964829283-572 | StaticConfiguration | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding-over-DOM codec shortcuts are enabled 2025-09-23T00:59:06,704 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$W], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:07,726 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$X], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:08,745 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:09,773 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:10,795 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:11,816 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:12,834 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:13,855 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:14,876 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:15,896 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:16,915 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:17,936 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:18,956 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:19,977 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:20,996 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:22,016 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:23,036 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:24,057 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:25,076 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:26,095 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:26,116 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-09-23T00:59:27,116 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:28,136 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:29,157 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:30,177 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:31,197 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:32,217 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:33,236 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:34,256 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:35,276 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:36,298 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:37,318 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:38,338 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:39,358 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:40,377 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:41,398 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:42,418 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:43,439 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:44,458 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:45,479 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:46,499 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:47,153 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-09-23T00:59:47,519 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:48,539 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:49,559 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:50,579 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T00:59:51,599 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:52,620 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:53,640 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:54,660 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:55,679 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T00:59:56,700 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:57,720 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:58,740 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T00:59:59,760 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:00,783 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:01,802 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:02,824 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:03,841 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:04,862 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:05,882 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:06,902 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:07,923 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:08,192 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:00:08,945 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:09,972 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:10,992 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:12,012 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:13,033 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:14,053 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:15,074 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:16,094 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:17,114 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:18,133 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:19,154 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:20,175 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:21,195 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:22,216 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:23,237 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:24,255 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:25,276 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:26,298 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:27,316 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:28,336 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:29,232 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:00:29,357 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:30,377 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:31,397 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:32,416 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:33,437 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:34,458 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:35,478 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:36,498 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:37,519 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:38,538 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:39,559 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:40,579 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:41,599 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:42,618 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:43,639 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:44,659 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:45,679 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:46,517 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Before Cluster Restart 2025-09-23T01:00:46,699 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:47,721 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:48,741 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:49,760 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:50,263 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:00:50,781 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:51,800 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:52,821 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:53,841 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:54,861 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:55,882 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:00:56,902 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:57,925 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:58,943 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:00:59,963 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:00,983 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:02,003 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:03,023 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:04,044 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:05,067 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:06,084 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:06,127 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-14-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]} timed out after 120.023631666 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-14-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]} timed out after 120.023631666 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-14-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]} timed out after 120.023631666 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:01:07,104 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:08,124 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:09,144 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:10,165 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:11,185 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:11,303 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:01:12,205 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:13,225 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:14,246 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:15,266 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:16,286 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:17,306 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:18,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:19,346 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:20,367 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:21,387 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:22,408 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:23,427 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:24,448 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:25,468 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:26,489 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:27,509 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:28,529 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:29,549 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:30,569 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:31,589 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:32,342 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:01:32,610 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:33,630 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:34,650 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:35,670 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:36,690 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:37,710 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:38,730 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:39,749 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:40,768 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:41,787 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:42,807 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:43,826 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:44,846 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:45,865 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:46,885 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:47,904 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:48,924 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:49,944 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:50,963 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:51,982 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:53,001 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:53,383 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:01:54,020 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:55,040 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:56,060 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:01:57,079 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:58,099 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:01:59,118 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:00,138 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:01,158 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:02,177 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:03,197 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:04,216 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:05,236 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:06,255 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:07,275 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:08,295 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:09,313 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:10,333 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:11,353 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:12,372 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:13,392 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:14,414 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:14,422 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:02:15,434 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:16,451 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:17,470 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:18,490 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:19,510 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:20,529 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:21,550 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:22,570 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:23,589 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:24,608 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:25,628 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:26,648 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:27,049 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1 and Exit" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1 and Exit
2025-09-23T01:02:27,593 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false]
2025-09-23T01:02:27,593 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false]
2025-09-23T01:02:27,668 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:28,111 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS
2025-09-23T01:02:28,688 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:29,707 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:29,792 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Reconnect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Reconnect To Follower Node1
2025-09-23T01:02:30,727 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:31,746 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:32,589 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Reconnected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Reconnected To Follower Node1
2025-09-23T01:02:32,632 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true]
2025-09-23T01:02:32,767 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:32,913 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true]
2025-09-23T01:02:33,786 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:34,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:02:35,462 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] 
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:02:35,825 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:36,847 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:37,865 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:38,885 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:39,905 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:40,925 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:41,945 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$de], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:42,964 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:43,984 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:45,004 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:46,024 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$he], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:47,044 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:48,063 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:49,083 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:50,103 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:51,123 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:52,143 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:53,163 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:54,183 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:02:55,202 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:56,222 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:56,503 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:02:57,242 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:58,261 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:02:59,281 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:00,301 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:01,321 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$we], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:02,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:03,365 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:04,381 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:05,401 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:06,154 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-15-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}]} timed out after 120.021526556 seconds. 
The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-15-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}]} timed out after 120.021526556 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-15-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}]} timed out after 120.021526556 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T01:03:06,421 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:07,441 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:08,461 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$De], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:09,481 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:10,501 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:11,520 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:12,540 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$He], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:13,560 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:14,580 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:15,600 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:16,620 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:17,544 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:03:17,643 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:18,660 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:19,681 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:20,700 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:21,720 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:22,739 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:23,760 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:24,783 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:25,800 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:26,819 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:27,839 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$We], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:28,860 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:29,879 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:30,899 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:31,920 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:32,939 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:33,959 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:34,979 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:36,003 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:37,019 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:38,039 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:38,582 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:03:39,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:40,079 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:41,099 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:42,118 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:43,139 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:44,159 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:45,180 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:46,199 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:47,220 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:48,238 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:49,259 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:50,280 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:51,299 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$if], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:52,319 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:53,339 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:54,359 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:55,379 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:03:56,399 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:03:57,419 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:03:58,439 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:03:59,459 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:03:59,622 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:04:00,479 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:01,499 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:02,520 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:03,539 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:04,559 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:05,579 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:06,599 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:07,619 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:08,639 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:09,659 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:10,679 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:11,700 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:12,720 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:13,739 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:14,760 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:14,971 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1
2025-09-23T01:04:15,413 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false]
2025-09-23T01:04:15,413 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false]
2025-09-23T01:04:15,780 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:15,918 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS
2025-09-23T01:04:16,800 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:17,600 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node2
2025-09-23T01:04:17,821 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$If], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:18,840 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:19,859 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:20,350 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Follower Node2
2025-09-23T01:04:20,663 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:04:20,742 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true]
2025-09-23T01:04:20,879 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:20,983 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true]
2025-09-23T01:04:21,076 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:21,900 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:22,098 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:22,920 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:23,116 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:23,941 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:24,137 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:24,960 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:25,156 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:25,980 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:26,176 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:27,000 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:27,197 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:28,020 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:28,218 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:29,040 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:29,237 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:30,060 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:30,257 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:31,080 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:31,277 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:32,100 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:32,297 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:33,122 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:33,317 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:34,140 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:34,337 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:35,160 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:35,357 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:36,181 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:36,377 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:37,200 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:37,397 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:38,221 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:38,423 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:39,240 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:39,448 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:40,260 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:40,467 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:41,281 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:41,487 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:41,702 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:04:42,301 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:42,507 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:43,321 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:43,527 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$D], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:44,340 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:04:44,548 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$E], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:45,361 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:45,567 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$F], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:46,382 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:46,586 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$G], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:47,402 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:47,611 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$H], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:48,421 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:48,627 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$I], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:49,441 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:49,647 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$J], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:50,461 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:50,667 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$K], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:51,482 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:51,687 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$L], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:52,502 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:52,706 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$M], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:53,522 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:53,726 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$N], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:54,541 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:54,747 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$O], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:55,562 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:55,767 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$P], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:56,583 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:56,786 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:57,602 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:57,807 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$R], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:58,621 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:58,827 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$S], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:04:59,642 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:04:59,847 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$T], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:00,662 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:00,867 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$U], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:01,682 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:01,886 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$V], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:02,702 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:02,743 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:05:02,906 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$W], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:03,722 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:03,927 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$X], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:04,742 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:04,947 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:05,763 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:05,967 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:06,173 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-16-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]} timed out after 120.014780411 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] 
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-16-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]} timed out after 120.014780411 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-16-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]} timed out after 120.014780411 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:05:06,782 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:06,987 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:07,803 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:08,007 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:08,823 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:09,027 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:09,842 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:10,047 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:10,862 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:11,066 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:11,883 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:12,087 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:12,903 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:13,107 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:13,923 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:14,126 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:14,943 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:15,147 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:15,963 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:16,167 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:16,983 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:17,187 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:18,003 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:18,206 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:19,023 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:19,234 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:20,043 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:20,257 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:21,064 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:21,277 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:22,084 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:22,302 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:05:23,104 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:05:23,317 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:05:23,782 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:05:24,124 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:24,336 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:25,144 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:25,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:26,163 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:26,376 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:27,184 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:27,396 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:28,204 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:28,417 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:29,225 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:29,436 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:30,244 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:30,456 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:31,264 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:31,477 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:32,285 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:32,496 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:33,305 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:33,517 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:34,324 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:34,536 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:35,345 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:35,556 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:36,365 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:36,577 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:37,385 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:37,597 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:38,405 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:38,616 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:39,426 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:39,637 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:40,445 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:40,657 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:41,465 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:41,677 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:42,485 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:42,697 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:43,505 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:43,718 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:44,525 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:05:44,737 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:05:44,822 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:05:45,546 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4g], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:45,757 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:46,565 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:46,777 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:47,586 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:47,796 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:48,606 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:48,816 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:49,626 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:49,836 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:50,646 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:50,857 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:51,667 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:51,877 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:52,686 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:52,896 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:53,706 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:53,917 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:54,727 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:54,936 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:55,746 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:55,957 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:56,767 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:56,977 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:57,786 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:57,997 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:05:58,806 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:59,017 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:05:59,825 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:00,037 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:00,845 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:01,056 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:01,865 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:02,077 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:02,697 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node2 2025-09-23T01:06:02,885 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:03,097 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:03,122 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:06:03,122 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:06:03,629 | INFO | node-cleaner-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-09-23T01:06:03,905 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:04,116 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:04,925 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:05,136 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:05,277 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Inventory Leader
2025-09-23T01:06:05,862 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:06:05,948 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:06,157 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
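The WARN above ends in a NoShardLeaderException whose message advises "Try again later." Below is a minimal Java sketch of the kind of bounded retry with backoff a datastore client could wrap around a call that fails while member-1-shard-inventory-config has no active leader; the helper name, timing values, and the choice to treat TimeoutException as the retryable signal are assumptions made for illustration, not OpenDaylight controller internals.

import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

/**
 * Illustrative only: retries an operation that may fail while a shard has no
 * elected leader, backing off between attempts. The exception handling and
 * timing values are assumptions for this sketch, not OpenDaylight internals.
 */
public final class NoLeaderRetry {

    public static <T> T callWithRetry(Callable<T> operation, int maxAttempts, long initialBackoffMillis)
            throws Exception {
        long backoff = initialBackoffMillis;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (TimeoutException e) {
                // "Shard has no current leader" surfaces as a TimeoutException in the log above;
                // treat it as transient and retry after a growing delay.
                last = e;
                Thread.sleep(backoff);
                backoff = Math.min(backoff * 2, 5_000L);
            }
        }
        if (last != null) {
            throw last;
        }
        throw new IllegalArgumentException("maxAttempts must be >= 1");
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage: the lambda stands in for a datastore read or connect attempt.
        String result = callWithRetry(() -> "ok", 5, 200L);
        System.out.println(result);
    }
}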
2025-09-23T01:06:06,965 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:07,177 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:07,930 | INFO | epollEventLoopGroup-5-1 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.171.238:33968, NodeId:null 2025-09-23T01:06:07,953 | INFO | epollEventLoopGroup-5-2 | ConnectionAdapterImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Hello received 2025-09-23T01:06:07,963 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 connected. 2025-09-23T01:06:07,963 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | No context chain found for device: openflow:1, creating new. 2025-09-23T01:06:07,963 | INFO | epollEventLoopGroup-5-2 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Device connected to controller, Device:/10.30.171.238:33972, NodeId:Uri{value=openflow:1} 2025-09-23T01:06:07,985 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:07,987 | INFO | epollEventLoopGroup-5-2 | RoleContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started timer for setting SLAVE role on device openflow:1 if no role will be set in 20s. 
2025-09-23T01:06:08,073 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Leader
2025-09-23T01:06:08,073 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true]
2025-09-23T01:06:08,153 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true]
2025-09-23T01:06:08,154 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting DeviceContextImpl[NEW] service for node openflow:1
2025-09-23T01:06:08,163 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RpcContextImpl[NEW] service for node openflow:1
2025-09-23T01:06:08,180 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting StatisticsContextImpl[NEW] service for node openflow:1
2025-09-23T01:06:08,180 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RoleContextImpl[NEW] service for node openflow:1
2025-09-23T01:06:08,181 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | SetRole called with input:SetRoleInput{controllerRole=BECOMEMASTER, node=NodeRef{value=DataObjectIdentifier[ @ urn.opendaylight.inventory.rev130819.Nodes ... nodes.Node[NodeKey{id=Uri{value=openflow:1}}] ]}}
2025-09-23T01:06:08,182 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Requesting state change to BECOMEMASTER
2025-09-23T01:06:08,182 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | RoleChangeTask called on device:openflow:1 OFPRole:BECOMEMASTER
2025-09-23T01:06:08,182 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | getGenerationIdFromDevice called for device: openflow:1
2025-09-23T01:06:08,185 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started clustering services for node openflow:1
2025-09-23T01:06:08,187 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange called for device:Uri{value=openflow:1}, role:BECOMEMASTER
2025-09-23T01:06:08,190 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange onSuccess for device:Uri{value=openflow:1}, role:BECOMEMASTER
2025-09-23T01:06:08,196 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:08,199 | INFO | ofppool-0 | FlowNodeReconciliationImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Triggering reconciliation for device NodeKey{id=Uri{value=openflow:1}}
2025-09-23T01:06:09,004 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:09,217 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
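The SalRoleRpc and RoleService entries above (getGenerationIdFromDevice followed by submitRoleChange with BECOMEMASTER) rely on the OpenFlow 1.3 role-request mechanism, in which a switch rejects a MASTER or SLAVE role request whose generation_id is stale under a wraparound-safe signed comparison. Below is a small Java sketch of that comparison rule only; the class and method names are assumptions for the example, not openflowplugin APIs.

/**
 * Illustrative sketch of the OpenFlow 1.3 generation_id staleness check a switch
 * applies to OFPT_ROLE_REQUEST messages asking for the MASTER or SLAVE role.
 * Names here are assumptions for the example, not openflowplugin code.
 */
public final class RoleGenerationCheck {

    private long cachedGenerationId = 0;
    private boolean generationSeen = false;

    /** Returns true if the request may be applied, false if it must be rejected as stale. */
    public boolean acceptRoleRequest(long requestedGenerationId) {
        // Wraparound-safe signed comparison: the request is stale when the distance is negative.
        if (generationSeen && requestedGenerationId - cachedGenerationId < 0) {
            return false; // the switch answers with an OFPET_ROLE_REQUEST_FAILED / STALE error
        }
        cachedGenerationId = requestedGenerationId;
        generationSeen = true;
        return true;
    }

    public static void main(String[] args) {
        RoleGenerationCheck check = new RoleGenerationCheck();
        System.out.println(check.acceptRoleRequest(5));  // true  - first request is accepted
        System.out.println(check.acceptRoleRequest(4));  // false - an older generation is stale
        System.out.println(check.acceptRoleRequest(6));  // true  - a newer generation advances the cache
    }
}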
2025-09-23T01:06:10,024 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:10,237 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:11,044 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:11,257 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:12,064 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:12,276 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:13,083 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:13,299 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:14,104 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:14,316 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:15,124 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:15,338 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:16,144 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:16,357 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:17,163 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:17,377 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:18,185 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:18,397 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:19,203 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:19,416 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:20,223 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:20,437 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:21,242 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:21,457 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:22,262 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:22,476 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:23,282 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:23,497 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:24,302 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:24,517 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:25,322 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:25,537 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:26,342 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:26,557 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:26,903 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:06:27,362 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:27,576 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:28,382 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:28,599 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:06:29,401 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:29,617 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:30,422 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:30,637 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:31,441 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:31,657 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:32,462 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:32,677 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:33,481 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:33,697 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:34,501 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:34,717 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:35,521 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:35,737 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:36,542 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:36,757 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:37,561 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:37,777 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:38,581 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:38,797 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:39,600 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:39,817 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:40,620 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:40,837 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:41,641 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:41,858 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:42,661 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:42,877 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:43,681 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:43,896 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:44,700 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:44,916 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:45,720 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:45,936 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:46,739 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:46,957 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:47,760 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:47,943 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:06:47,977 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:48,780 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:48,997 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:49,800 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:50,016 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:50,819 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:51,038 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:51,839 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:52,059 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:52,860 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:53,077 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:53,880 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:54,097 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:54,899 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:55,117 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:55,919 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:56,137 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:56,939 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:57,157 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:57,963 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:58,177 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:06:58,978 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:59,197 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:06:59,999 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:00,217 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:01,020 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:01,237 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:02,039 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:02,256 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:03,058 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:03,277 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:04,078 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:04,297 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:05,099 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:05,317 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:06,118 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:06,203 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-17-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}]} timed out after 120.026453271 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-17-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}]} timed out after 120.026453271 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-17-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}]} timed out after 120.026453271 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:07:06,337 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:07,138 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:07,356 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:08,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:08,377 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:08,982 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:07:09,178 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:09,397 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:10,198 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:10,417 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:11,218 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:11,437 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:12,238 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:12,456 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:13,258 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:13,477 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:14,278 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:14,498 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:15,298 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:15,517 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:16,318 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:16,537 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:17,337 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:17,556 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:18,357 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:18,577 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:19,379 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:19,596 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:20,398 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:20,616 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:21,418 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:21,637 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:22,438 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:22,656 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:23,458 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:23,676 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:24,477 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:24,697 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:25,498 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:25,717 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:26,517 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:26,737 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:27,537 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:27,757 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:28,557 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:28,777 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:29,577 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:29,797 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:30,022 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:07:30,597 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:30,817 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:31,617 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:31,837 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:32,637 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:32,858 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:33,657 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:33,877 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:34,678 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:34,897 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:35,696 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:35,917 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:36,717 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:36,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:37,736 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:37,957 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:38,757 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:38,976 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:39,777 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:39,997 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:40,797 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:41,017 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:41,816 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:42,037 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:42,836 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:43,057 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:43,859 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:44,077 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:44,876 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:45,096 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:45,897 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:46,116 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:46,917 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:47,137 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:47,936 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:48,156 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:48,508 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Inventory Leader
2025-09-23T01:07:48,763 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.MacAddress
2025-09-23T01:07:48,764 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.PhysAddress
2025-09-23T01:07:48,765 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.HexString
2025-09-23T01:07:48,766 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.DottedQuad
2025-09-23T01:07:48,767 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.Uuid
2025-09-23T01:07:48,812 | INFO | epollEventLoopGroup-5-2 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.171.238:33972, NodeId:openflow:1
2025-09-23T01:07:48,812 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 disconnected.
2025-09-23T01:07:48,812 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1}
2025-09-23T01:07:48,813 | WARN | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Reconciliation framework failure for device openflow:1
java.util.concurrent.CancellationException: Task was cancelled.
    at com.google.common.util.concurrent.AbstractFuture.cancellationExceptionWithCause(AbstractFuture.java:1021) ~[?:?]
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:288) ~[?:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:235) ~[?:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[?:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[?:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[?:?]
    at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:246) ~[?:?]
    at com.google.common.util.concurrent.Futures.getDone(Futures.java:1175) ~[?:?]
    at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1123) ~[?:?]
    at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[?:?]
    at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1004) ~[?:?]
    at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:767) ~[?:?]
    at com.google.common.util.concurrent.AbstractFuture.cancel(AbstractFuture.java:372) ~[?:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.cancel(FluentFuture.java:120) ~[?:?]
    at org.opendaylight.openflowplugin.applications.reconciliation.impl.ReconciliationManagerImpl.cancelNodeReconciliation(ReconciliationManagerImpl.java:138) ~[?:?]
    at org.opendaylight.openflowplugin.applications.reconciliation.impl.ReconciliationManagerImpl.onDeviceDisconnected(ReconciliationManagerImpl.java:115) ~[?:?]
    at org.opendaylight.openflowplugin.impl.mastership.MastershipChangeServiceManagerImpl.becomeSlaveOrDisconnect(MastershipChangeServiceManagerImpl.java:101) ~[?:?]
    at org.opendaylight.openflowplugin.impl.lifecycle.ContextChainHolderImpl.destroyContextChain(ContextChainHolderImpl.java:363) ~[?:?]
    at org.opendaylight.openflowplugin.impl.lifecycle.ContextChainHolderImpl.onDeviceDisconnected(ContextChainHolderImpl.java:273) ~[?:?]
    at org.opendaylight.openflowplugin.impl.connection.ConnectionContextImpl.propagateDeviceDisconnectedEvent(ConnectionContextImpl.java:179) ~[?:?]
    at org.opendaylight.openflowplugin.impl.connection.ConnectionContextImpl.disconnectDevice(ConnectionContextImpl.java:168) ~[?:?]
    at org.opendaylight.openflowplugin.impl.connection.ConnectionContextImpl.onConnectionClosed(ConnectionContextImpl.java:126) ~[?:?]
    at org.opendaylight.openflowplugin.impl.connection.listener.SystemNotificationsListenerImpl.onDisconnect(SystemNotificationsListenerImpl.java:86) ~[?:?]
    at org.opendaylight.openflowjava.protocol.impl.core.connection.ConnectionAdapterImpl.consumeDeviceMessage(ConnectionAdapterImpl.java:121) ~[?:?]
    at org.opendaylight.openflowjava.protocol.impl.core.connection.AbstractConnectionAdapterStatistics.consume(AbstractConnectionAdapterStatistics.java:68) ~[?:?]
    at org.opendaylight.openflowjava.protocol.impl.core.connection.ConnectionAdapterImpl.consume(ConnectionAdapterImpl.java:62) ~[?:?]
    at org.opendaylight.openflowjava.protocol.impl.core.DelegatingInboundHandler.channelInactive(DelegatingInboundHandler.java:53) ~[?:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81) ~[?:?]
    at org.opendaylight.openflowjava.protocol.impl.core.connection.AbstractOutboundQueueManager.channelInactive(AbstractOutboundQueueManager.java:169) ~[?:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81) ~[?:?]
    at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:284) ~[?:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?]
    at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:412) ~[?:?]
    at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:377) ~[?:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?]
    at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:412) ~[?:?]
    at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:377) ~[?:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1424) ~[?:?]
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:876) ~[?:?]
    at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:684) ~[?:?]
    at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:148) ~[?:?]
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:141) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:507) ~[?:?]
    at io.netty.channel.SingleThreadIoEventLoop.run(SingleThreadIoEventLoop.java:182) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:1073) ~[?:?]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[?:?]
    at java.lang.Thread.run(Thread.java:1583) [?:?]
2025-09-23T01:07:48,815 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1}
2025-09-23T01:07:48,820 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1}
2025-09-23T01:07:48,820 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Role SLAVE was granted to device openflow:1
2025-09-23T01:07:48,821 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping RoleContextImpl[RUNNING] service for node openflow:1
2025-09-23T01:07:48,821 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping StatisticsContextImpl[RUNNING] service for node openflow:1
2025-09-23T01:07:48,820 | WARN | pool-29-thread-1 | FlowNodeReconciliationImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Fail with read Config/DS for Node DataObjectIdentifier[ @ urn.opendaylight.inventory.rev130819.Nodes ... nodes.Node[NodeKey{id=Uri{value=openflow:1}}] @ urn.opendaylight.flow.inventory.rev130819.FlowCapableNode ] !
java.lang.InterruptedException: null
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:249) ~[?:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[?:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[?:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[?:?]
    at org.opendaylight.openflowplugin.applications.frm.impl.FlowNodeReconciliationImpl$ReconciliationTask.call(FlowNodeReconciliationImpl.java:354) ~[?:?]
    at org.opendaylight.openflowplugin.applications.frm.impl.FlowNodeReconciliationImpl$ReconciliationTask.call(FlowNodeReconciliationImpl.java:336) ~[?:?]
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:128) ~[?:?]
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74) ~[?:?]
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:80) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
    at java.lang.Thread.run(Thread.java:1583) [?:?]
2025-09-23T01:07:48,821 | INFO | epollEventLoopGroup-5-2 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping running statistics gathering for node openflow:1
2025-09-23T01:07:48,823 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping RpcContextImpl[RUNNING] service for node openflow:1
2025-09-23T01:07:48,824 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping DeviceContextImpl[RUNNING] service for node openflow:1
2025-09-23T01:07:48,834 | INFO | epollEventLoopGroup-5-2 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Closed clustering services registration for node openflow:1
2025-09-23T01:07:48,834 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating DeviceContextImpl[TERMINATED] service for node openflow:1
2025-09-23T01:07:48,835 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating RpcContextImpl[TERMINATED] service for node openflow:1
2025-09-23T01:07:48,835 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating StatisticsContextImpl[TERMINATED] service for node openflow:1
2025-09-23T01:07:48,835 | INFO | epollEventLoopGroup-5-2 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping running statistics gathering for node openflow:1
2025-09-23T01:07:48,835 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating RoleContextImpl[TERMINATED] service for node openflow:1
2025-09-23T01:07:48,835 | INFO | ofppool-1 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Closed clustering services for node openflow:1
2025-09-23T01:07:48,834 | WARN | ofppool-0 | DeviceContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Error processing port status message for port 1 on device 1
org.opendaylight.mdsal.binding.api.TransactionChainClosedException: Cannot write into transaction.
    at org.opendaylight.openflowplugin.common.txchain.TransactionChainManager.writeToTransaction(TransactionChainManager.java:243) ~[?:?]
    at org.opendaylight.openflowplugin.impl.device.DeviceContextImpl.writeToTransaction(DeviceContextImpl.java:244) ~[?:?]
    at org.opendaylight.openflowplugin.impl.device.DeviceContextImpl.lambda$writePortStatusMessage$1(DeviceContextImpl.java:374) ~[?:?]
    at com.google.common.collect.ImmutableList.forEach(ImmutableList.java:421) ~[?:?]
    at org.opendaylight.openflowplugin.impl.device.DeviceManagerImpl.lambda$new$0(DeviceManagerImpl.java:94) ~[?:?]
    at org.opendaylight.yangtools.util.concurrent.AbstractQueuedNotificationManager.executeBatch(AbstractQueuedNotificationManager.java:88) ~[?:?]
    at org.opendaylight.yangtools.util.concurrent.AbstractBatchingExecutor$DispatcherTask.invokeWorker(AbstractBatchingExecutor.java:305) ~[?:?]
    at org.opendaylight.yangtools.util.concurrent.AbstractBatchingExecutor$DispatcherTask.run(AbstractBatchingExecutor.java:292) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
    at java.lang.Thread.run(Thread.java:1583) [?:?]
2025-09-23T01:07:48,837 | ERROR | ofppool-0 | AbstractBatchingExecutor | 358 - org.opendaylight.yangtools.util - 14.0.14 | port-status-queue: Error invoking worker 1 with [org.opendaylight.openflowplugin.impl.device.DeviceContextImpl$$Lambda/0x00000007c17caa60@2aa7df7b]
java.lang.NullPointerException: Cannot invoke "org.opendaylight.openflowplugin.common.txchain.TransactionChainManager.releaseWriteTransactionLock()" because "this.transactionChainManager" is null
    at org.opendaylight.openflowplugin.impl.device.DeviceContextImpl.releaseWriteTransactionLock(DeviceContextImpl.java:620) ~[?:?]
    at org.opendaylight.openflowplugin.impl.device.DeviceContextImpl.lambda$writePortStatusMessage$1(DeviceContextImpl.java:388) ~[?:?]
    at com.google.common.collect.ImmutableList.forEach(ImmutableList.java:421) ~[?:?]
    at org.opendaylight.openflowplugin.impl.device.DeviceManagerImpl.lambda$new$0(DeviceManagerImpl.java:94) ~[?:?]
    at org.opendaylight.yangtools.util.concurrent.AbstractQueuedNotificationManager.executeBatch(AbstractQueuedNotificationManager.java:88) ~[?:?]
    at org.opendaylight.yangtools.util.concurrent.AbstractBatchingExecutor$DispatcherTask.invokeWorker(AbstractBatchingExecutor.java:305) ~[?:?]
    at org.opendaylight.yangtools.util.concurrent.AbstractBatchingExecutor$DispatcherTask.run(AbstractBatchingExecutor.java:292) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
    at java.lang.Thread.run(Thread.java:1583) [?:?]
2025-09-23T01:07:48,863 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_LOST_NO_OWNER [wasOwner=true, isOwner=false, hasOwner=false]
2025-09-23T01:07:48,903 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_LOST_NO_OWNER [wasOwner=true, isOwner=false, hasOwner=false]
2025-09-23T01:07:48,956 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:49,177 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:49,406 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS
2025-09-23T01:07:49,976 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:50,196 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:50,997 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:51,063 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:07:51,207 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Delete All Flows From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Delete All Flows From Follower Node1
2025-09-23T01:07:51,217 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:07:52,016 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:52,237 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:53,036 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:53,256 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:54,056 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:54,277 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:55,076 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:55,297 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:56,097 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:56,317 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:57,116 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:57,336 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:58,136 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:58,356 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:07:59,157 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:07:59,376 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:00,176 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:00,397 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:01,196 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:01,418 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:02,217 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:02,437 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:03,236 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:03,457 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:04,256 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:04,477 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:05,276 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:05,496 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:06,296 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:06,517 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:07,316 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:07,536 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:08,336 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:08,557 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:09,356 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:09,576 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:10,597 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:11,325 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:11,617 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:12,103 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:08:12,644 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:13,667 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:14,687 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:15,707 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:16,330 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:16,726 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:17,356 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:17,747 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:18,767 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:19,053 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:19,788 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:20,075 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:20,807 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:21,096 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:21,827 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:22,115 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:22,846 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:23,136 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:23,867 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:24,155 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:24,886 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:25,175 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:25,907 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:26,196 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:26,927 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:27,216 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:27,946 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:28,235 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:28,966 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:29,255 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:29,987 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:30,276 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:31,007 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:31,295 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:32,027 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:32,315 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:33,046 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:33,143 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:08:33,336 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:34,067 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:34,356 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:35,087 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:35,376 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:36,107 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:36,396 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:37,127 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:37,416 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:38,147 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$de], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:38,436 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:39,167 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:39,456 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:40,186 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:40,476 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:41,207 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:41,495 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:42,227 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$he], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:42,516 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:43,246 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:43,536 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:44,266 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:44,556 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:45,287 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:45,576 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:46,307 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:46,596 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:47,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:47,617 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:48,346 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:48,636 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:49,366 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:49,655 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:50,387 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:50,676 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:51,407 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:51,696 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:52,427 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:52,717 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:53,446 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:53,736 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:54,183 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:08:54,466 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:08:54,756 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:55,487 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:55,776 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:56,507 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:56,796 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:57,526 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$we], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:57,816 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:58,546 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:58,836 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:08:59,566 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:08:59,857 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:00,586 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:00,877 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:01,607 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:01,897 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:02,627 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:02,917 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:03,647 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:03,936 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:04,666 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$De], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:04,957 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:05,687 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:05,977 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:06,234 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-19-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]} timed out after 120.025444268 seconds. The backend for inventory is not available.]]}
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-19-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]} timed out after 120.025444268 seconds. The backend for inventory is not available.
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-19-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]} timed out after 120.025444268 seconds. The backend for inventory is not available.
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-09-23T01:09:06,707 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:07,000 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:07,726 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:08,017 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:08,746 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$He], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:09,037 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:09,767 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:10,057 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:10,786 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:11,077 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:11,806 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:12,097 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:12,827 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:13,117 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:13,847 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:14,137 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:14,866 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:15,157 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:15,222 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-09-23T01:09:15,886 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:16,177 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:16,906 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:17,197 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:17,926 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:09:18,217 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:18,946 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:19,237 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:19,967 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:20,257 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:20,986 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:21,277 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:22,007 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:22,297 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:23,026 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:23,317 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:24,047 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$We], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:24,338 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:25,066 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:25,357 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:26,086 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:26,378 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:27,106 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:27,398 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:28,126 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:28,418 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:29,146 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:29,437 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:30,167 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:30,458 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:31,187 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:31,477 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:32,207 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:32,498 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:32,583 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify No Flows In Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify No Flows In Inventory Leader 2025-09-23T01:09:33,226 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:33,518 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:34,246 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:34,538 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:35,267 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:35,558 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:36,262 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:09:36,287 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:36,578 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:37,307 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:37,599 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:38,326 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:38,618 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:39,346 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:39,639 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:40,366 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:40,658 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:41,386 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:41,678 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:42,406 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:42,698 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:43,427 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:43,718 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:44,447 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:44,738 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:45,467 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:45,758 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:46,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:46,778 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:47,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:47,798 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:48,527 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$if], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:48,819 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:49,546 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:49,839 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:50,566 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:50,860 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:51,586 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:51,879 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:52,606 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:52,898 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:53,626 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:53,918 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:54,647 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:54,939 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:55,666 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:55,964 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:56,686 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:56,979 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:57,302 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:09:57,707 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:57,999 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:58,727 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:09:59,019 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:09:59,746 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:00,039 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:00,766 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:01,059 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:01,787 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:02,080 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:02,806 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:03,099 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:03,826 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:04,119 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:04,847 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:05,139 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:05,867 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:06,159 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:06,887 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:07,180 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:07,907 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:08,199 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:08,926 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:09,220 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:09,947 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:10,239 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:10,967 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:11,260 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:11,988 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:12,280 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:13,006 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:13,300 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:14,026 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:14,320 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:15,046 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$If], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:15,340 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:16,066 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:16,360 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:17,086 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:17,380 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:18,106 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:18,342 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:10:18,400 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:19,126 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:19,421 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:20,147 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:20,440 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:21,166 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:21,461 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$el], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:22,187 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:22,481 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:23,207 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:23,500 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:24,227 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:24,520 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:25,247 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:25,541 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:26,266 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:26,561 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:27,287 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:27,581 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:28,306 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:28,601 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:29,327 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:29,621 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:30,346 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:30,640 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:31,367 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:31,661 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:32,386 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:32,681 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:33,406 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:33,701 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:34,426 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:34,722 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:35,446 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:35,742 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:36,466 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:36,761 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:37,487 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:37,781 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:38,507 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:38,802 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:39,383 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:10:39,527 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:39,822 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:40,547 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:40,841 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:41,566 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:41,862 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:42,587 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:42,881 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:43,606 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:43,901 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:44,626 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:44,921 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:45,651 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:45,942 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:46,667 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:46,962 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:47,686 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:47,982 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$El], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:48,707 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:49,001 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:49,726 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:50,022 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:50,747 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:51,042 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:51,766 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:52,062 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:52,786 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:53,083 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:53,806 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:54,102 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:54,827 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:55,123 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:55,847 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:56,142 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:56,867 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:57,162 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:57,887 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:10:58,183 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:10:58,907 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:59,202 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:10:59,926 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:00,222 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:00,422 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:11:00,946 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:01,242 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:01,966 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:02,263 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:02,987 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:03,283 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:04,006 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:04,303 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:05,026 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:05,322 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:06,047 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:06,263 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-20-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]} timed out after 120.024711184 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-20-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]} timed out after 120.024711184 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-20-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]} timed out after 120.024711184 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:11:06,342 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:07,067 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:07,362 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:08,086 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:08,384 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:09,107 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:09,403 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:10,126 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:10,423 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:11,146 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:11,443 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:12,167 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:12,463 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:13,186 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:13,484 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:14,206 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:14,503 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:15,226 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:15,523 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:15,715 | INFO | sshd-SshServer[7a5ec7bb](port=8101)-nio2-thread-1 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.170.213:45018 authenticated 2025-09-23T01:11:16,245 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:16,544 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:16,593 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/020__Cluster_HA_Data_Recovery_BulkFlow_2Node_Cluster.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/020__Cluster_HA_Data_Recovery_BulkFlow_2Node_Cluster.robot 2025-09-23T01:11:16,988 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status and Initialize Variables" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status and Initialize Variables 2025-09-23T01:11:17,267 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:17,564 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:18,286 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:18,584 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:19,306 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:19,604 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:19,770 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower Before Leader Restart 2025-09-23T01:11:20,326 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:20,623 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:21,346 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:21,462 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:11:21,644 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:22,366 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
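[editor note] The warning above shows the inventory-config shard on member-1 timing out without a Raft leader while the member-2 and member-3 frontends keep retrying their ConnectClientRequest. One way to triage this kind of failure out of band is to read the shard's Raft state over the Jolokia endpoint installed by the odl-jolokia feature. The sketch below is only illustrative: the controller address passed as an argument, port 8181, the admin/admin credentials and the exact MBean name are assumptions about this test environment, not values taken from the log.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hypothetical helper: reads the shard MBean that OpenDaylight typically exposes over
// Jolokia and prints the JSON reply, which usually carries RaftState, Leader and PeerAddresses.
public class ShardStatusCheck {
    public static void main(String[] args) throws Exception {
        String host = args[0]; // management address of the node to query, e.g. the member-1 host
        String mbean = "org.opendaylight.controller:type=DistributedConfigDatastore,"
                + "Category=Shards,name=member-1-shard-inventory-config"; // assumed MBean name
        String url = "http://" + host + ":8181/jolokia/read/" + mbean;   // assumed Jolokia port/path
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes()); // assumed default credentials

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}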
2025-09-23T01:11:22,664 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:23,386 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:23,684 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:24,406 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:24,704 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:25,426 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:25,724 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:26,446 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:26,744 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:27,466 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:27,764 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:28,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:28,784 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:29,507 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:29,804 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:30,263 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Leader From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Leader From Cluster Node 2025-09-23T01:11:30,528 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:30,653 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Shutdown" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Shutdown 2025-09-23T01:11:30,824 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:31,547 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:31,844 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:32,566 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:32,864 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:32,904 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-21-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]} timed out after 120.017807695 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-21-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]} timed out after 120.017807695 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-21-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.1}]} timed out after 120.017807695 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:11:33,586 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
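[editor note] The ERROR above comes from bulk-o-matic's FlowReader blocking on its read future: after the leader shutdown, the request sits in the client connection queue until the frontend request timeout (about 120 seconds here) expires, and the ExecutionException then surfaces the DataStoreUnavailableException. A minimal, hypothetical retry wrapper for such a blocking read is sketched below. It uses only JDK types, treats any DataStoreUnavailableException in the cause chain as retryable, and the attempt count and delay are illustrative values, not anything the plugin itself uses.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;

// Hypothetical helper: retries a blocking datastore read a few times when the failure
// is caused by an unavailable backend (e.g. while the shard leader is being restarted).
public final class RetryingRead {
    public static <T> T readWithRetry(Callable<T> read, int maxAttempts, long delayMillis) throws Exception {
        ExecutionException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return read.call();
            } catch (ExecutionException e) {
                if (!isBackendUnavailable(e)) {
                    throw e; // not a transient datastore outage, propagate immediately
                }
                last = e;
                Thread.sleep(delayMillis); // simple fixed backoff between attempts
            }
        }
        throw last;
    }

    private static boolean isBackendUnavailable(Throwable t) {
        for (Throwable cause = t; cause != null; cause = cause.getCause()) {
            // Matched by class name so this sketch compiles without the controller artifacts on the classpath.
            if (cause.getClass().getName().endsWith("DataStoreUnavailableException")) {
                return true;
            }
        }
        return false;
    }
}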
2025-09-23T01:11:33,885 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:34,606 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:34,905 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:35,626 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:35,924 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:36,646 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:36,944 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:37,667 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:37,965 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:38,691 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:38,985 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:39,707 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:40,005 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:40,727 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:41,025 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:41,746 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:42,045 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:11:42,502 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:11:42,766 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4g], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:43,065 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:43,786 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:44,086 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:44,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:45,105 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:45,826 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:46,125 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:46,846 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:47,145 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:47,866 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:48,166 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:48,886 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:49,186 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:49,906 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:50,206 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:50,927 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:51,226 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:51,947 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:52,246 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:52,967 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:53,266 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:53,986 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:54,286 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:55,006 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:55,307 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:56,026 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:56,326 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:57,046 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:57,346 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:58,067 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:11:58,367 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:59,086 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:11:59,387 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:00,107 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:00,406 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:01,126 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:01,426 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:02,147 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:02,447 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:03,166 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:03,467 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:03,543 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:12:04,186 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:04,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:05,206 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:05,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:06,226 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:06,527 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:07,245 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:07,547 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:08,266 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:08,567 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:09,286 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:09,587 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:10,306 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:10,607 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:11,326 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:11,628 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:12,345 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:12,647 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:13,365 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:13,668 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:14,386 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:14,687 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:15,407 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:15,707 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:16,426 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:16,728 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:17,447 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:17,748 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:18,466 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:18,768 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:19,486 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:19,788 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:20,506 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:20,808 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:21,527 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:21,828 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:22,547 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:22,848 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:23,566 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:23,868 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:24,583 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:12:24,587 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:24,889 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:25,606 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:25,912 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:26,626 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:26,928 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:27,646 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:27,948 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$an], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:28,666 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:28,969 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:29,686 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:29,989 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:30,706 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:31,009 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:31,727 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:32,028 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$en], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:32,747 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:33,049 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:33,766 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:34,069 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:34,786 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:35,089 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:35,806 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:36,109 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$in], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:36,826 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:37,129 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:37,847 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:38,148 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:38,867 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:39,168 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:39,886 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:40,189 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:40,907 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:41,209 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:41,926 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:42,229 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$on], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:42,946 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:43,249 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:43,966 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:44,270 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:44,988 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:45,289 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:45,622 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:12:46,007 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:46,720 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:47,027 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:47,739 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:12:48,046 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
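The WARN above is the client-side symptom of the rejections logged around it: AbstractShardBackendResolver gives up with a TimeoutException because member-1-shard-inventory-config keeps answering ConnectClientRequest with "not currently leader" (isLeaderActive: false), so a NoShardLeaderException ("Try again later") surfaces to the caller. As a rough illustration of that "try again later" handling, here is a minimal, hypothetical JDK-only sketch of retrying an operation while a shard reports no active leader; LeaderRetry, NoLeaderException and callWithRetry are stand-ins invented for this sketch, not OpenDaylight APIs.

import java.time.Duration;
import java.util.concurrent.Callable;

// Hypothetical sketch only: retry an operation that fails while a shard has no active leader.
final class LeaderRetry {

    // Stand-in for the NoShardLeaderException seen in the log; not the real OpenDaylight class.
    static final class NoLeaderException extends RuntimeException {
        NoLeaderException(String message) { super(message); }
    }

    // Calls op; on a "no leader" failure sleeps and retries, up to maxAttempts times.
    static <T> T callWithRetry(Callable<T> op, int maxAttempts, Duration backoff) throws Exception {
        NoLeaderException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call(); // e.g. a connect or read against the shard backend
            } catch (NoLeaderException e) { // "currently has no leader. Try again later."
                last = e;
                Thread.sleep(backoff.toMillis() * attempt); // simple linear backoff between attempts
            }
        }
        throw last != null ? last : new IllegalArgumentException("maxAttempts must be >= 1");
    }
}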
2025-09-23T01:12:48,760 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:49,066 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:49,780 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:50,086 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:50,800 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:51,107 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:51,820 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:52,126 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:52,840 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:53,146 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:53,860 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:54,167 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:54,880 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$An], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:55,186 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:55,901 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:56,206 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:56,920 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:57,227 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:57,940 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:58,247 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:12:58,960 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$En], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:59,267 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:12:59,980 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:00,286 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:01,000 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:01,306 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:02,020 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:02,327 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:03,041 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$In], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:03,347 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:04,061 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:04,366 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:05,081 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:05,386 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:06,101 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:06,294 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-22-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]} timed out after 120.026469744 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-22-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]} timed out after 120.026469744 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-22-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]} timed out after 120.026469744 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:13:06,407 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:06,662 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:13:07,121 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:07,426 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:08,141 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:08,446 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:09,162 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$On], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:09,467 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:10,181 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:10,486 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:11,202 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:11,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:11,535 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shard Status For Leader After PreLeader Shutdown" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shard Status For Leader After PreLeader Shutdown 2025-09-23T01:13:11,960 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node1 2025-09-23T01:13:12,222 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:12,367 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower 2025-09-23T01:13:12,527 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:12,788 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader 2025-09-23T01:13:13,204 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Cluster Restart 2025-09-23T01:13:13,242 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:13,546 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
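The ROBOT MESSAGE records above mark the bulk-o-matic phase of the suite: flows are pushed from a follower member and then read back and verified on the leader. The odl-openflowplugin-app-bulk-o-matic feature installed at startup is normally driven through RESTCONF RPCs from its sal-bulk-flow model; the sketch below shows how such an "add bulk flows" step might be invoked. The RPC name, its input leaves, and the /rests/operations endpoint are assumptions based on the bulk-o-matic application, not taken from this log, and should be checked against the installed YANG before use.

# Sketch: drive the bulk-o-matic "add flows" step over RESTCONF.
# The RPC name (sal-bulk-flow:flow-test) and its input leaves are assumptions
# about the bulk-o-matic model; verify against the deployed YANG modules.
import requests

BASE = "http://127.0.0.1:8181/rests/operations"   # RFC 8040 operations resource (assumed)
AUTH = ("admin", "admin")                          # default Karaf credentials (assumed)

payload = {
    "input": {
        "is-add": True,         # True adds the generated flows, False removes them
        "dpn-count": 1,         # number of switches to program (here only openflow:1)
        "flows-per-dpn": 1000,  # bulk size used by the test
        "batch-size": 100,
        "seq": True,            # write sequentially through the config datastore
        "tx-chain": False,
        "sleep-for": 0,
        "sleep-after": 0,
        "create-parents": True,
    }
}

resp = requests.post(
    f"{BASE}/sal-bulk-flow:flow-test",
    json=payload,
    headers={"Content-Type": "application/json"},
    auth=AUTH,
    timeout=30,
)
print(resp.status_code, resp.text)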
2025-09-23T01:13:13,598 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Pre Leader From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Pre Leader From Cluster Node 2025-09-23T01:13:13,993 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Restart 2025-09-23T01:13:14,263 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:14,566 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:15,282 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:15,586 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:16,302 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:16,606 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:17,322 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:17,626 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:18,342 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:18,647 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:19,363 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:19,666 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:20,382 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:20,686 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:21,402 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:21,707 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:22,422 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:22,726 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:23,442 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:23,746 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:24,463 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:24,766 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:25,482 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:25,787 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:26,502 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:26,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:27,522 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:27,703 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:13:27,826 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:28,542 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:28,846 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:29,563 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:29,867 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:30,583 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:30,887 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:31,603 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:31,907 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:32,622 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:32,923 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-23-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}]} timed out after 120.01359515 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-23-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}]} timed out after 120.01359515 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-23-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.2}]} timed out after 120.01359515 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:13:32,926 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
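Both chained failures in this section, txn-22 against Flow-openflow:1.0.7 and txn-23 against Flow-openflow:1.1.2, are the same symptom: FlowReader's config-datastore reads wait out the full frontend request timeout (about 120 seconds, the datastore default) because no reachable leader exists for the inventory-config shard backend. Once leadership settles, the same flow path can be checked externally over RESTCONF, which is effectively what the "Get Bulk Flows And Verify In Leader" step does. The sketch below assumes the RFC 8040 data resource exposed with odl-openflowplugin-flow-services-rest and default credentials; both are assumptions about this deployment rather than facts from the log.

# Sketch: read back one of the flows that FlowReader failed to read, over RESTCONF.
# The /rests/data URL shape and admin/admin credentials are assumptions about
# this deployment, not confirmed by the log.
import requests

BASE = "http://127.0.0.1:8181/rests/data"
AUTH = ("admin", "admin")

# Config-datastore path matching the failed read:
#   nodes/node=openflow:1 -> table=1 -> flow=Flow-openflow:1.1.2
url = (f"{BASE}/opendaylight-inventory:nodes/node=openflow%3A1"
       "/flow-node-inventory:table=1/flow=Flow-openflow%3A1.1.2")

resp = requests.get(url, params={"content": "config"}, auth=AUTH, timeout=30)
if resp.status_code == 200:
    print(resp.json())
elif resp.status_code == 404:
    print("flow not present in the config datastore")
else:
    print("read failed:", resp.status_code, resp.text)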
2025-09-23T01:13:33,643 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:33,946 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:34,663 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:34,969 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:35,684 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:35,986 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:36,704 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:37,006 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:37,723 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:38,026 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:38,744 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:39,047 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:39,764 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:40,066 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:40,784 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:41,086 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:41,803 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:42,106 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:42,824 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:43,126 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:43,844 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:44,146 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:44,864 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:45,168 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:45,884 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:46,186 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:46,904 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$no], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:47,206 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:47,925 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:48,226 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:48,743 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:13:48,945 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:49,246 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:49,964 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:13:50,266 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:50,984 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:51,287 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:52,005 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$so], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:52,306 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:53,024 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$to], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:53,326 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:54,044 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:54,347 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:55,065 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:55,367 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:56,085 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:56,386 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:57,105 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:57,406 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:13:58,125 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:58,426 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:59,144 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:13:59,447 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:00,165 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:00,466 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:01,185 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:01,486 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:02,205 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:02,506 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:03,225 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:03,526 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:04,245 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:04,547 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:05,269 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:05,567 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:06,286 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:06,586 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:07,307 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:07,607 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:08,326 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:08,628 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:09,345 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:09,646 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:09,782 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:14:10,365 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:10,666 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:11,386 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:11,686 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:12,405 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:12,706 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:13,426 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$No], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:13,727 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:14,446 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:14,746 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:15,466 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:15,765 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:16,486 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:16,786 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:17,506 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:17,806 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:18,526 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$So], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:18,827 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:19,547 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$To], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:19,846 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:20,567 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:20,867 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:21,589 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:21,887 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:22,607 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:22,907 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:23,627 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:23,926 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:24,646 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:24,946 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:25,667 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:25,967 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:26,687 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:26,986 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:27,706 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:28,007 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:28,727 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:29,026 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:29,747 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:30,046 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:30,767 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:30,824 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:14:31,067 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:31,786 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:32,087 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:32,806 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:33,107 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:33,826 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:34,126 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:34,846 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:35,147 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:35,866 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:36,166 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:36,886 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:37,186 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:37,906 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:38,206 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:38,925 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:39,227 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:39,946 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:40,246 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:40,966 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:41,266 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:41,986 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:42,287 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:43,006 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:43,306 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:44,025 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:44,326 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:45,045 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:45,346 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:46,066 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:46,366 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:47,085 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:47,386 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:48,105 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:48,405 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:49,125 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:49,426 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:50,145 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:50,446 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:51,164 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:51,467 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:51,862 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:14:52,185 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:52,487 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:53,204 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:14:53,506 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:54,225 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:54,527 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:55,245 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:55,547 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:56,264 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:56,566 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:57,284 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:57,587 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:58,304 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:14:58,606 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:59,324 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:14:59,626 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:00,344 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:00,646 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:01,364 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:01,666 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:02,384 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:02,687 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:03,405 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:03,706 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:04,423 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:04,727 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:05,444 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:05,746 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:06,323 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-24-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]} timed out after 120.024941095 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-24-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]} timed out after 120.024941095 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-24-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]} timed out after 120.024941095 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:15:06,464 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
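The ERROR above shows the bulk-o-matic FlowReader blocking in FluentFuture.get() (FlowReader.java:86) until the frontend gives up after roughly 120 seconds with DataStoreUnavailableException, because member-1-shard-inventory-config has no active leader to serve the read. For reference only, the sketch below shows the same kind of binding-aware CONFIGURATION-datastore read with a locally bounded wait and explicit failure handling; it is not the bulk-o-matic implementation, and the DataBroker wiring, the InstanceIdentifier<Flow> path and the 30-second bound are illustrative assumptions.

```java
import java.util.Optional;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.opendaylight.mdsal.binding.api.DataBroker;
import org.opendaylight.mdsal.binding.api.ReadTransaction;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.yang.gen.v1.urn.opendaylight.flow.inventory.rev130819.tables.table.Flow;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

final class BoundedFlowRead {
    private BoundedFlowRead() {
        // static helper only; a sketch, not the bulk-o-matic code
    }

    /**
     * Read one flow from the CONFIGURATION datastore with a bounded wait, so a
     * missing shard leader surfaces as a handled failure instead of a worker
     * thread parked for the full frontend timeout.
     */
    static Optional<Flow> readFlow(final DataBroker dataBroker, final InstanceIdentifier<Flow> flowPath) {
        // ReadTransaction is treated as AutoCloseable here (assumption for recent MD-SAL releases).
        try (ReadTransaction rtx = dataBroker.newReadOnlyTransaction()) {
            return rtx.read(LogicalDatastoreType.CONFIGURATION, flowPath)
                    .get(30, TimeUnit.SECONDS);          // local bound instead of an unbounded get()
        } catch (ExecutionException e) {
            // ReadFailedException wrapping DataStoreUnavailableException lands here,
            // matching the trace above.
            return Optional.empty();
        } catch (TimeoutException e) {
            // The local bound expired before the inventory shard answered.
            return Optional.empty();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return Optional.empty();
        }
    }
}
```

A caller could retry such a read once shard leadership settles; the ConnectClientRequest rejections that continue below indicate the shard was still without an active leader at this point.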
2025-09-23T01:15:06,767 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:07,483 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:07,786 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:08,664 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:08,807 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:09,794 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:09,826 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:10,812 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:10,846 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:11,833 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:11,866 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:12,853 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:12,886 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:12,902 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:15:13,874 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:13,906 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:14,926 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:15,576 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:15,947 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
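The WARN above bottoms out in NoShardLeaderException for member-1-shard-inventory-config, consistent with the surrounding rejections (isLeader: true but isLeaderActive: false). One way to observe the shard's Raft state from outside the controller process is the per-shard MBean exposed over Jolokia. The sketch below is an assumption-based diagnostic, not part of this log's workflow: it presumes the odl-jolokia feature is installed, that the MBean follows the usual DistributedConfigDatastore/Shards naming pattern, and that localhost, port 8181 and admin/admin credentials apply to the deployment.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

/** Query the Raft state of the config inventory shard via Jolokia (diagnostic sketch). */
public final class ShardRaftStateCheck {
    public static void main(String[] args) throws Exception {
        // Per-shard MBean name, assumed from the shard name seen in the log.
        String mbean = "org.opendaylight.controller:type=DistributedConfigDatastore,"
                + "Category=Shards,name=member-1-shard-inventory-config";
        // Host, port and credentials are placeholders for the actual deployment.
        URI uri = URI.create("http://localhost:8181/jolokia/read/" + mbean);
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON reply carries attributes such as RaftState, Leader and PeerAddresses;
        // a shard in the condition logged above would not report an active leader.
        System.out.println(response.body());
    }
}
```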
2025-09-23T01:15:16,592 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:16,966 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:17,613 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:17,986 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:18,632 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:19,006 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:19,653 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:20,026 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:20,672 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:21,046 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:21,692 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:22,066 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:22,712 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:23,086 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:23,735 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:24,106 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:24,752 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:25,126 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:25,772 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:26,146 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:26,792 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:27,166 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:27,811 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:28,186 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:28,832 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:29,206 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:29,852 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:30,227 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:30,875 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:31,246 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:31,891 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:32,266 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:32,911 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:32,955 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-25-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]} timed out after 120.026422292 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-25-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]} timed out after 120.026422292 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-25-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.3}]} timed out after 120.026422292 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ...
5 more 2025-09-23T01:15:33,286 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:33,932 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:33,943 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:15:34,306 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:34,952 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:35,327 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:35,971 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:36,347 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:36,992 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:37,366 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:38,011 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:38,387 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:39,031 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:39,407 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:40,051 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:40,427 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:41,071 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:41,447 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:42,091 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:42,466 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:43,111 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:43,486 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:44,130 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:44,506 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:45,151 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:45,527 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:46,170 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:46,546 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:47,191 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:47,566 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:48,211 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:48,585 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:49,230 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:49,606 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:50,250 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:50,627 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:51,270 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:51,646 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:52,290 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:52,667 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:53,310 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:53,690 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:54,330 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:54,707 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:54,982 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:15:55,350 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:55,727 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:56,370 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:15:56,747 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:57,390 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:57,766 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:58,410 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:58,787 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:15:59,430 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:15:59,806 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:00,450 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:00,826 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:01,469 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:01,846 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:02,490 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:02,867 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:03,509 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:03,886 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:04,529 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:04,907 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:05,549 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:05,926 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:06,569 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:06,946 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:07,590 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:07,966 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:08,610 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:08,988 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:09,629 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:10,006 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:10,649 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:11,026 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:11,669 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:12,047 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:12,690 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:13,067 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:13,709 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:14,087 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:14,729 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:15,106 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:15,750 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:16,023 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:16:16,127 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$el], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:16,769 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:17,146 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:17,788 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:18,166 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:18,809 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:19,187 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:19,828 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:20,206 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:20,848 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:21,227 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:21,869 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:22,247 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:22,889 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:23,266 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:23,909 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:24,286 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:24,930 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:25,306 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:25,949 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:26,326 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:26,969 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:27,346 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:27,988 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:28,366 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:29,008 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:29,386 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:30,028 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:30,407 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:31,048 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:31,426 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:32,068 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:32,446 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:33,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:33,466 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:34,108 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:34,487 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:35,128 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:35,506 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:36,148 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:36,527 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:37,063 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:16:37,167 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:16:37,546 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:38,188 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:38,566 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:39,208 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:39,586 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:40,228 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:40,606 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:41,248 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:41,627 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:42,268 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:42,647 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$El], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:43,288 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:43,666 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:44,308 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:44,686 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:45,327 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:45,706 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:46,348 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:46,727 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:47,367 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:47,746 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:48,388 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:48,766 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:49,408 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:49,786 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:50,428 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:50,806 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:51,447 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:51,827 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:52,468 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:52,846 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:53,488 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:53,866 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:54,508 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:54,886 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:55,527 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:55,906 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:56,547 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:56,926 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:57,567 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:57,947 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:58,102 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:16:58,587 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:16:58,966 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:59,607 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:16:59,987 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:00,627 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:01,006 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:01,647 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:02,027 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:02,667 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:03,047 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:03,687 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:04,066 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:04,707 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:05,087 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:05,727 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:06,107 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:06,353 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-26-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]} timed out after 120.025453369 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-26-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]} timed out after 120.025453369 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-26-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]} timed out after 120.025453369 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:17:06,748 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:07,128 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:07,767 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:08,146 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:08,787 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:09,166 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:09,807 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:10,186 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:10,827 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:11,206 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:11,846 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:12,227 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:12,867 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:13,246 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:13,887 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:14,266 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:14,907 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:15,287 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:15,927 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:16,307 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:16,946 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:17,326 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:17,966 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:18,346 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:18,986 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:19,143 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:17:19,366 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:20,007 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:20,387 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:21,026 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:21,407 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:22,046 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:22,426 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:23,066 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:23,447 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:24,086 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:24,466 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:25,106 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:25,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:26,127 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:26,509 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:27,146 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:27,526 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:28,171 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:28,546 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:29,187 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:29,567 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:30,206 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:30,586 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:31,226 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:31,606 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:32,247 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:32,626 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:32,982 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-27-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}]} timed out after 120.021179507 seconds. 
The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-27-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}]} timed out after 120.021179507 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-27-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.4}]} timed out after 120.021179507 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T01:17:33,266 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:33,647 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:34,286 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:34,666 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:35,307 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:35,686 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:36,326 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:36,707 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:37,346 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:37,726 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:38,366 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:38,746 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:39,387 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:39,767 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:40,183 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:17:40,406 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:40,786 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:41,426 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:41,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:42,446 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:42,826 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:43,466 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:43,846 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:44,485 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:44,866 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:45,506 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:45,886 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:46,526 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:46,906 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:47,546 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:47,925 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:48,566 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:48,946 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:49,586 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:49,967 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:50,606 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:50,986 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:51,626 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:52,006 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:52,646 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:53,026 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:53,665 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:54,047 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:54,686 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:55,066 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:55,706 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$as], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:56,086 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:56,725 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:57,106 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:17:57,746 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:58,127 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:58,765 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:59,146 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:17:59,786 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:00,166 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:00,806 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:01,186 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:01,222 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:18:01,826 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:02,206 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:02,845 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:03,227 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:03,866 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:04,247 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:04,886 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:05,267 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:05,906 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:06,286 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:06,926 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:07,306 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:07,946 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:08,327 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:08,965 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:09,347 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:09,986 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:10,366 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:11,005 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:11,386 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:12,026 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:12,406 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:13,045 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:13,426 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:14,066 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:14,447 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:15,086 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:15,467 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:16,106 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:16,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:17,126 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:17,506 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:18,146 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:18,526 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:19,166 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:19,546 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:20,186 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:20,567 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:21,205 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:21,586 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:22,227 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$As], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:22,262 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:18:22,606 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$an], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:18:23,245 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:23,626 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:24,265 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:24,646 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:25,285 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:25,667 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:26,305 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:26,687 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$en], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:27,327 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:27,707 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:28,345 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:28,726 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:29,366 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:29,746 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:30,386 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:30,766 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$in], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:31,406 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:31,786 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:32,426 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:32,806 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:33,446 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:33,826 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:34,465 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:34,846 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:35,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:35,866 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:36,505 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:36,886 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$on], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:37,906 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:38,927 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:39,252 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:39,260 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 3204 millis 2025-09-23T01:18:39,946 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:40,275 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:40,966 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:41,295 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
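The block of rejections above, together with the PhiAccrualFailureDetector warning, shows member-1 still holding Raft leadership for member-1-shard-inventory-config (isLeader: true) while that leadership is not yet active, so ConnectClientRequest attempts from member-2 (10.30.171.168) and member-3 (10.30.171.150) keep being bounced. A minimal sketch of watching the shard's Raft state from outside while this happens, assuming the odl-jolokia feature listed at startup exposes Jolokia on port 8181 with admin/admin credentials and the standard shard MBean name; the member-1 address is a placeholder, not taken from this log:

import time
import requests

# Standard shard MBean name for the config datastore inventory shard (assumed).
MBEAN = ("org.opendaylight.controller:type=DistributedConfigDatastore,"
         "Category=Shards,name=member-1-shard-inventory-config")

def shard_state(base_url):
    # Jolokia read of the shard MBean; RaftState/Leader mirror what the Shard
    # actor logs above ("isLeader: true, isLeaderActive: false").
    resp = requests.get(f"{base_url}/jolokia/read/{MBEAN}",
                        auth=("admin", "admin"), timeout=10)
    resp.raise_for_status()
    value = resp.json().get("value", {})
    return value.get("RaftState"), value.get("Leader")

if __name__ == "__main__":
    BASE = "http://<member-1-ip>:8181"   # placeholder; member-1's address is not in this excerpt
    for _ in range(30):                  # poll for about a minute during the episode
        print(shard_state(BASE))
        time.sleep(2)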
2025-09-23T01:18:41,986 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:42,315 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:43,006 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:43,303 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:18:43,335 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:44,027 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:44,356 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:45,046 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:45,375 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
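The "Failed to resolve shard ... Shard has no current leader" timeout above is the frontend giving up while waiting for an active leader; it clears on its own once leadership settles. A minimal sketch of the readiness check a test driver could run before issuing datastore traffic, assuming the same Jolokia endpoint, admin/admin credentials and the standard shard-manager MBean name (none of these are confirmed by the log itself):

import time
import requests

# Config-datastore shard manager MBean (assumed standard name).
MBEAN = ("org.opendaylight.controller:type=DistributedConfigDatastore,"
         "Category=ShardManager,name=shard-manager-config")

def wait_for_sync(base_url, timeout_s=300):
    # Poll SyncStatus until the local shards report themselves in sync,
    # instead of letting reads hit "Shard has no current leader" timeouts.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        r = requests.get(f"{base_url}/jolokia/read/{MBEAN}/SyncStatus",
                         auth=("admin", "admin"), timeout=10)
        if r.ok and r.json().get("value") is True:
            return True
        time.sleep(5)
    return False

# Example: wait_for_sync("http://10.30.171.168:8181")  # member-2; RESTCONF/Jolokia port assumed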
2025-09-23T01:18:46,066 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:46,395 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:47,088 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:47,415 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:48,107 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:48,436 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:49,126 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$An], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:49,455 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:50,146 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:50,475 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:51,166 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:51,494 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:52,186 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:52,515 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:53,207 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$En], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:53,535 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:54,226 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:54,555 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:55,246 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:55,575 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:56,266 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:56,595 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:57,286 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$In], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:57,615 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:58,306 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:18:58,635 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:59,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:18:59,655 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:00,346 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:00,675 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:01,366 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:01,695 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:02,386 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:02,715 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$at], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:03,406 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$On], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:03,735 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:04,342 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:19:04,427 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:04,755 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:05,318 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Leader Restart 2025-09-23T01:19:05,446 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:05,776 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
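The pipe-log entry above is the test harness writing a marker into karaf.log so the controller log can be correlated with the Robot test "Verify Data Recovery After Leader Restart". A rough sketch of how such a marker can be injected over the Karaf SSH console, assuming the default console port 8101 and karaf/karaf credentials, neither of which is stated in this log:

import paramiko

def mark_test(host, message):
    # Runs the Karaf log:log command, which produces an INFO entry via
    # org.apache.karaf.log.core like the "ROBOT MESSAGE" line above.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, port=8101, username="karaf", password="karaf",
                   look_for_keys=False, allow_agent=False)
    client.exec_command(f'log:log "ROBOT MESSAGE: {message}"')
    client.close()

# Example: mark_test("10.30.171.168", "Starting test ... Verify Data Recovery After Leader Restart")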
2025-09-23T01:19:06,375 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-28-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]} timed out after 120.015317439 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-28-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]} timed out after 120.015317439 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-28-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]} timed out after 120.015317439 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:19:06,466 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
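The read that times out above is a single config-datastore lookup of one flow (nodes/node=openflow:1/table=1/flow=Flow-openflow:1.1.10). As a rough illustration of the call shape involved (this is not the bulk-o-matic FlowReader code; only the MD-SAL binding API names are taken from the trace, the helper and its generic form are assumed), such a read is issued through a read-only transaction, and the returned future only fails once the frontend gives up on the unreachable inventory shard, which is where the ~120-second figure in the RequestTimeoutException comes from.

// Illustrative sketch only -- not the OpenDaylight/bulk-o-matic implementation.
// Assumes the MD-SAL binding APIs visible in the trace (DataBroker, ReadTransaction);
// the helper name and generic shape are hypothetical.
import java.util.Optional;
import java.util.concurrent.ExecutionException;

import com.google.common.util.concurrent.FluentFuture;

import org.opendaylight.mdsal.binding.api.DataBroker;
import org.opendaylight.mdsal.binding.api.ReadTransaction;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.yangtools.yang.binding.DataObject;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

final class ConfigReadSketch {
    private ConfigReadSketch() {
        // utility holder, not instantiated
    }

    /**
     * Reads one object from the CONFIGURATION datastore and blocks for the result.
     * When the shard backend cannot be resolved (as in this log), the returned future
     * fails only after the frontend request timeout expires, and get() rethrows the
     * failure wrapped in an ExecutionException.
     */
    static <T extends DataObject> Optional<T> readConfig(DataBroker broker, InstanceIdentifier<T> path)
            throws InterruptedException, ExecutionException {
        ReadTransaction rtx = broker.newReadOnlyTransaction();
        FluentFuture<Optional<T>> future = rtx.read(LogicalDatastoreType.CONFIGURATION, path);
        return future.get(); // blocking, mirroring the FlowReader.readFlowsX call site above
    }
}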
2025-09-23T01:19:06,795 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:07,486 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:07,816 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:08,506 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:08,835 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:09,526 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:09,856 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:10,546 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:10,875 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$it], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:11,566 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:11,895 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:12,586 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:12,915 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:13,606 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:13,936 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:14,627 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:14,955 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:15,646 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:15,976 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:16,666 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:16,995 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:17,686 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:18,015 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:18,706 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:19,035 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:19,726 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:20,056 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:20,746 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:21,075 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$st], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:21,766 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:22,095 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:22,786 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:23,115 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:23,806 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:24,136 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:24,826 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:25,155 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:25,383 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:19:25,846 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:26,175 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:26,866 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:27,195 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:27,885 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:28,215 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:28,906 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:29,236 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$At], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:29,926 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:30,255 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:30,947 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:31,275 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:31,966 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:32,296 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:32,986 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
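The repeated "not currently leader, rejecting request ... isLeader: true, isLeaderActive: false" entries look contradictory at first glance; the flags printed at the end of each record resolve it. The following is a reading aid only, not the actual check in the controller's Shard code: a shard that is Raft leader but whose leadership is not yet active (or is being handed over) still refuses new frontend connections, so member-2 and member-3 keep retrying their ConnectClientRequest.

// Reading aid, not the OpenDaylight Shard implementation. The three flags are the
// ones printed in the log records above; the method name is made up.
public class ConnectGateSketch {
    /** A shard accepts a ConnectClientRequest only when it is the active, stable leader. */
    static boolean canAcceptConnect(boolean isLeader, boolean isLeaderActive,
                                    boolean isLeadershipTransferInProgress) {
        return isLeader && isLeaderActive && !isLeadershipTransferInProgress;
    }

    public static void main(String[] args) {
        // Values taken from the entries above: Raft leader, but leadership not active,
        // hence "not currently leader, rejecting request ...".
        System.out.println(canAcceptConnect(true, false, false)); // false -> reject
    }
}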
2025-09-23T01:19:33,003 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-29-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]} timed out after 120.017310056 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-29-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]} timed out after 120.017310056 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-29-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.5}]} timed out after 120.017310056 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:19:33,315 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
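The FlowReader ERROR above is the blocking side of the same failure: the read future completes exceptionally, and Guava's AbstractFuture rethrows the cause wrapped in an ExecutionException at the get() call. A minimal, self-contained reproduction of that wrapping (only the Guava API is real here; the failure and its message are stand-ins for the ReadFailedException in the log) looks like this:

// Standalone sketch of the ExecutionException wrapping seen in the FlowReader entries.
import java.util.Optional;
import java.util.concurrent.ExecutionException;

import com.google.common.util.concurrent.FluentFuture;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

public class ExecutionExceptionSketch {
    public static void main(String[] args) throws InterruptedException {
        // Model a datastore read whose backend never answers: the future is failed.
        // In the real system this only happens after the request timeout expires.
        ListenableFuture<Optional<Object>> failed = Futures.immediateFailedFuture(
                new IllegalStateException("The backend for inventory is not available."));
        FluentFuture<Optional<Object>> readFuture = FluentFuture.from(failed);

        try {
            readFuture.get(); // blocking get(), as in FlowReader.readFlowsX
        } catch (ExecutionException e) {
            // The original failure is carried as the cause, which is why the log shows
            // java.util.concurrent.ExecutionException wrapping ReadFailedException.
            System.err.println("read failed: " + e.getCause().getMessage());
        }
    }
}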
2025-09-23T01:19:34,006 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:34,335 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:35,026 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:35,355 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:36,047 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:36,375 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:37,067 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:37,396 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$It], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:38,087 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:38,415 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:39,107 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:39,436 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:40,127 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:40,455 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:41,147 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$no], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:41,475 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:42,166 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:42,495 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:43,186 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:43,515 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:44,206 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:44,536 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:45,226 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:45,555 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:46,246 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$so], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:46,423 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:19:46,576 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:47,266 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$to], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:47,595 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$St], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:48,287 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:48,616 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:49,306 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:49,644 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:50,326 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:50,666 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:51,346 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:51,685 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:52,366 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:52,705 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:53,386 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:53,726 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:54,406 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:54,745 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:55,427 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:55,765 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:56,446 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:56,786 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:57,466 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:57,806 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:19:58,486 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:58,825 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:59,507 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:19:59,845 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:00,526 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:00,866 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:01,547 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:01,886 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:02,567 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:02,905 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:03,586 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:03,925 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:04,607 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:04,947 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:05,627 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:05,966 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:06,647 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:06,986 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:07,452 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:20:07,666 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$No], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:08,006 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:08,687 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:09,025 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:09,706 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:10,046 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:10,726 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:11,065 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:11,747 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:12,086 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:12,766 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$So], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:13,106 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:13,787 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$To], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:14,126 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:14,807 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:15,145 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:15,826 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:16,165 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:16,846 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:17,186 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:17,866 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:18,205 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:18,886 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:19,226 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:19,906 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:20,246 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:20,926 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:21,266 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:21,946 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:22,286 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:22,966 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:23,306 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:23,986 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:24,326 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:25,006 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:25,346 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:26,026 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:26,366 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:27,046 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:27,385 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:28,066 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:28,406 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:28,493 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:20:29,087 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:29,426 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:30,106 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:30,446 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:31,126 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:31,466 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:32,147 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:32,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:33,166 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:33,506 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:34,187 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:34,526 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:35,206 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:35,546 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:36,226 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:36,566 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:37,246 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:37,586 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:38,267 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:38,606 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:39,286 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:39,626 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:40,306 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:40,647 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:41,327 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:41,666 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:42,346 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:42,686 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:43,366 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:43,706 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:44,386 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:44,726 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:45,406 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:45,746 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:46,426 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:46,705 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Leader Restart 2025-09-23T01:20:46,766 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:47,118 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node1 2025-09-23T01:20:47,447 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:47,501 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node1 2025-09-23T01:20:47,786 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:47,869 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node 2025-09-23T01:20:48,284 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower And Leader Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower And Leader Before Cluster Restart 2025-09-23T01:20:48,467 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:48,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:49,487 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:49,523 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:20:49,826 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:50,507 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:50,847 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:51,526 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:51,866 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:52,547 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:20:52,886 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:53,565 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:53,906 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:54,587 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:54,926 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:55,606 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:55,947 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:56,626 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:56,966 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:57,646 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:20:57,987 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:58,667 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:59,007 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:20:59,639 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Follower From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Follower From Cluster Node 2025-09-23T01:20:59,686 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:00,009 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Shutdown" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Shutdown 2025-09-23T01:21:00,027 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:00,707 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:01,046 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:01,727 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:02,066 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:02,746 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:03,086 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:03,766 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:04,107 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:04,787 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:05,127 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:05,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:06,146 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:06,403 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.11}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.11}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-30-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.11}]} timed out after 120.023267583 seconds. 
The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.11}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-30-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.11}]} timed out after 120.023267583 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-30-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.11}]} timed out after 120.023267583 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ...
5 more 2025-09-23T01:21:06,827 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:07,167 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:07,846 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:08,187 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:08,866 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:09,207 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:09,886 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:10,227 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:10,563 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:21:10,906 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:11,246 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:11,927 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:12,267 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:12,947 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:13,287 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:13,966 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:14,307 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:14,986 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:15,327 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:16,007 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:16,346 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:17,026 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:17,367 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:18,046 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:18,387 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:19,067 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:19,407 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:20,086 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:20,427 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:21,106 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:21,447 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:22,126 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:22,468 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:23,146 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:23,487 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:24,166 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:24,507 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:25,186 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:25,527 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:26,206 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:26,547 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:27,227 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:27,567 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:28,247 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:28,586 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:29,266 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:29,607 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:30,286 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:30,628 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:31,306 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:31,592 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:21:31,648 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:32,327 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:32,667 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:33,033 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-31-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]} timed out after 120.026025755 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-31-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]} timed out after 120.026025755 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-31-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.6}]} timed out after 120.026025755 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:21:33,347 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:33,687 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:34,366 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:34,707 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:35,386 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:35,728 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:36,406 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:36,747 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:37,426 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:37,767 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:38,446 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:38,787 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:39,466 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:39,807 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:40,486 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:40,828 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:41,506 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:41,847 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:42,527 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:42,868 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:43,546 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:43,888 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:44,567 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:44,908 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:45,586 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:45,928 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:46,606 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:46,948 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:47,626 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:47,968 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:48,647 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:49,666 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:49,842 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:50,686 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:50,857 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:51,706 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:51,877 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:52,633 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:21:52,726 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:21:52,899 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:53,746 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:53,917 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:54,767 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:54,937 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:55,787 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:55,958 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:56,806 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:56,977 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:57,826 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:21:57,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:58,846 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:59,017 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:21:59,866 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:00,038 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:00,886 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:01,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:01,906 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:02,078 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:02,926 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:03,098 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:03,946 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:04,118 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:04,966 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:05,138 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:05,986 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:06,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:07,006 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:07,178 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:08,027 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:08,198 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:09,045 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:09,218 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:10,066 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:10,238 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:11,086 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:11,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:12,106 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:12,278 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:13,126 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:13,298 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:13,672 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:22:14,146 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:14,318 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:15,166 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:15,339 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:16,186 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:16,358 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:17,207 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:17,379 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:18,227 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:18,398 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:19,246 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:19,419 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:20,266 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:20,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:21,287 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:21,459 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:22,306 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:22,479 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:23,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:23,499 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:24,346 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:24,518 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:25,366 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:25,538 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:26,387 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:26,559 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:27,407 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:27,579 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:28,426 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:28,599 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:29,447 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:29,619 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:30,467 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:30,639 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:31,486 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:31,659 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:32,506 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:32,679 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:33,526 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:33,699 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:34,546 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:34,712 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:22:34,719 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:35,566 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:35,739 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:22:36,591 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:36,758 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:37,606 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:37,779 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:38,626 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:38,799 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:39,647 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:39,819 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:40,667 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:40,839 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:40,891 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node 2025-09-23T01:22:41,277 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower Node1 2025-09-23T01:22:41,682 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader Before Follower Restart 2025-09-23T01:22:41,687 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:41,859 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:42,130 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Follower Restart 2025-09-23T01:22:42,508 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Follower From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Follower From Cluster Node 2025-09-23T01:22:42,706 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:42,874 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Restart 2025-09-23T01:22:42,878 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:43,726 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:43,899 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:44,746 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:44,920 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:45,766 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:45,939 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:46,787 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:46,959 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:47,806 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:48,019 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:48,826 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:49,040 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:49,846 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:50,060 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:50,866 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:51,079 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:51,886 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:52,100 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:52,907 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:53,119 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:53,927 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:54,139 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:54,946 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:55,160 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:55,753 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:22:55,966 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:56,180 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:56,986 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:57,200 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:58,006 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:22:58,220 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:59,026 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:22:59,240 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:00,047 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:00,260 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:01,065 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:01,280 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:02,086 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:02,300 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:03,106 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:03,320 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:04,126 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:04,340 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:05,146 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:05,360 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:06,166 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:06,380 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:06,424 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.12}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.12}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-32-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.12}]} timed out after 120.016117279 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.12}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-32-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.12}]} timed out after 120.016117279 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-32-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.12}]} timed out after 120.016117279 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:23:07,186 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:07,400 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:08,206 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:08,420 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:09,226 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:09,440 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:10,247 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:10,460 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:11,267 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:11,480 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:12,286 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:13,306 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:14,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:14,443 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:14,444 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2194 millis 2025-09-23T01:23:15,346 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:15,461 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:16,366 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:16,480 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:16,792 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more
2025-09-23T01:23:17,386 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:17,501 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:18,406 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:18,520 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:19,426 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:19,540 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:20,446 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:20,560 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:21,466 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:21,580 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:22,486 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:22,600 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:23,506 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:23,620 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:24,527 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:24,640 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:25,546 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:25,661 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:26,566 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:26,681 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:27,586 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:27,700 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:28,605 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:28,720 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:29,626 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:29,740 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:30,646 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:30,761 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:31,666 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:31,780 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:32,686 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:32,800 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:33,053 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-33-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]} timed out after 120.016666539 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-33-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]} timed out after 120.016666539 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-33-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.7}]} timed out after 120.016666539 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more
2025-09-23T01:23:33,707 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:33,820 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:34,737 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:34,840 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:35,756 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:35,861 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:36,776 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:36,881 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:37,796 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:37,833 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more
2025-09-23T01:23:37,901 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:38,817 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:38,921 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:39,837 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:39,941 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:40,856 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:40,961 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:41,876 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:41,981 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:42,896 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:43,000 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:43,916 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:44,020 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:44,936 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:45,041 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:45,956 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:46,061 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:46,976 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:47,081 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:47,996 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:48,101 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:49,017 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$as], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:49,121 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:50,036 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:50,141 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:51,056 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:51,161 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:52,076 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:52,181 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:53,096 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:53,202 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:54,116 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:54,222 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:55,136 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:55,242 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:56,156 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:56,262 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:57,176 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:57,282 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:23:58,196 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:23:58,302 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:58,872 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:23:59,217 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:23:59,322 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:00,236 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:00,342 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:01,256 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:01,362 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:02,275 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:02,382 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:03,296 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:03,402 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:04,317 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:04,422 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:05,336 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:05,441 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:06,356 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:06,462 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:07,376 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:07,482 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:08,396 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:08,502 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:09,416 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:09,522 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:10,436 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:10,542 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:11,456 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:11,563 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:12,476 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:12,583 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:13,496 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:13,602 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:14,516 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:14,623 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:15,536 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$As], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:15,642 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:16,556 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:16,663 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:17,576 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:17,683 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:18,596 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:18,702 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:19,616 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:19,723 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:24:19,912 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:24:20,636 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:24:20,742 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:21,666 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:21,763 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:22,686 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:22,792 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:23,707 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:23,812 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:24,726 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:24,833 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:25,747 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:25,853 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:26,766 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:26,873 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:27,785 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:27,893 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:28,806 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:28,913 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:29,826 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:29,933 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:30,846 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:30,953 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:31,867 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:31,973 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:32,886 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:32,992 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$by], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:33,906 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:34,013 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:34,926 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:35,033 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:35,946 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:36,053 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:36,966 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:37,073 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:37,986 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:38,093 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:39,006 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:39,113 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:40,026 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:40,133 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:40,952 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] 
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:24:41,047 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:41,154 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:42,066 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:42,173 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:43,086 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:43,193 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:44,106 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:44,214 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$my], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:45,126 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:45,234 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:46,146 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:46,254 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:47,166 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:47,274 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:48,186 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:49,206 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:50,226 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:50,784 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:50,784 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2890 millis 2025-09-23T01:24:51,246 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:51,804 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ry], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:52,265 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:52,824 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:53,285 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:53,843 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ty], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:54,305 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$at], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:54,863 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:55,327 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:55,883 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:56,345 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:56,904 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:57,366 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:57,924 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:58,386 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:58,944 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:24:59,406 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:24:59,994 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:00,426 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:01,014 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:01,447 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:01,993 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:25:02,034 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$By], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:02,466 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$it], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:03,055 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:03,486 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:04,075 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:04,506 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:05,095 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:05,527 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:06,114 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:06,454 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.13}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.13}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-34-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.13}]} timed out after 120.025780249 seconds. The backend for inventory is not available.]]} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?] at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] 
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.13}] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-34-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.13}]} timed out after 120.025780249 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-34-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.13}]} timed out after 120.025780249 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T01:25:06,547 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:07,134 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:07,566 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:08,155 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:08,586 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:09,175 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:09,606 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:10,194 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:10,626 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:11,214 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:11,647 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:12,235 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:12,666 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$st], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:13,255 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$My], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:13,686 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:14,274 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:14,706 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:15,295 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:15,725 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:16,314 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:16,746 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:17,334 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:17,766 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:18,355 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ry], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:18,786 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:19,375 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:19,806 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:20,395 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ty], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:20,826 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$At], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:21,415 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:21,845 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:22,435 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:22,865 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:23,032 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-44 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:25:23,455 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:23,886 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:24,475 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:24,906 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:25,495 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:25,925 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:26,515 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:26,946 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:27,535 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:27,966 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:28,555 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:28,986 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$It], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:29,575 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:30,006 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:30,596 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:31,026 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:31,616 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:32,045 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:32,636 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:33,066 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:33,083 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-35-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]} timed out after 120.026026814 seconds. The backend for inventory is not available.]]}
	at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
	at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
	at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
	at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
	at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
	at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-35-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]} timed out after 120.026026814 seconds. The backend for inventory is not available.
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-35-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.8}]} timed out after 120.026026814 seconds. The backend for inventory is not available.
	at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
	at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T01:25:33,655 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:34,086 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:34,675 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:35,106 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:35,695 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:36,126 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:36,715 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:37,146 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:37,736 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:38,166 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:38,756 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:39,187 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$St], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:39,776 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$az], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:40,207 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:40,796 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:41,226 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:41,816 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:42,246 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:42,836 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:43,266 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:43,856 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ez], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:44,073 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:25:44,286 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:44,876 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:45,306 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:25:45,896 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:46,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:46,916 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:47,346 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:47,936 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:48,366 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:48,956 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:49,386 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:49,976 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:50,406 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:50,996 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:51,426 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:52,016 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:52,446 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:53,037 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:53,466 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:54,056 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:54,486 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:55,077 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:55,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:56,096 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:56,526 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:57,117 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:57,547 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:58,137 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:25:58,566 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:59,157 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:25:59,586 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:00,177 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:00,606 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:01,197 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:01,626 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:02,217 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:02,646 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:03,237 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:03,666 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:04,257 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:04,686 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:05,113 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-09-23T01:26:05,277 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:05,705 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:06,298 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Az], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:06,726 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:07,317 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:07,746 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:08,337 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:08,766 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:09,358 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:09,786 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:10,378 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ez], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:10,806 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:11,398 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:11,826 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:12,417 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:12,846 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:13,438 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:13,866 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:14,458 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:14,885 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:15,478 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:15,906 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:16,498 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:16,926 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:17,517 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:17,946 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:18,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:18,966 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:19,558 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:19,985 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:20,578 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Oz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:21,006 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:21,598 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:22,026 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:22,618 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:23,046 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:23,638 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:24,066 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:24,659 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:25,086 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:25,679 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:26,106 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:26,152 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-6 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:26:26,698 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uz], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:27,126 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:27,719 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:28,146 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:28,739 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:29,166 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:29,771 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:30,186 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:30,789 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:31,206 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:31,809 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:32,226 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:32,829 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:33,246 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:33,848 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:34,266 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:34,869 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:35,286 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:35,888 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:36,306 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:36,910 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:37,326 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:37,929 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:38,346 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:38,949 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:39,366 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:39,969 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:40,386 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:40,990 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:41,405 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:42,009 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:42,426 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:43,030 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:43,446 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:44,049 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:44,481 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:26:45,069 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:45,496 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:46,089 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:46,516 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:47,109 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:26:47,192 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:26:47,536 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:48,130 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:48,557 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:49,149 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:49,576 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:50,170 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:50,596 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:51,190 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:51,616 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:52,210 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:52,637 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:53,230 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:53,656 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:54,250 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:54,677 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:55,271 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:55,696 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:56,290 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:56,716 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:57,341 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:57,736 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:58,360 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:58,756 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:59,380 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:26:59,777 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:00,400 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:00,796 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:01,421 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:01,816 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:02,441 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:02,836 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:03,461 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:03,856 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:04,482 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:04,876 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:05,511 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:05,896 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:06,484 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.14}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.14}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-36-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.14}]} timed out after 120.026080785 seconds. The backend for inventory is not available.]]}
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.14}]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-36-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.14}]} timed out after 120.026080785 seconds. The backend for inventory is not available.
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-36-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.14}]} timed out after 120.026080785 seconds. The backend for inventory is not available.
at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-09-23T01:27:06,531 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:06,916 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:07,550 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:07,936 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:08,233 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-09-23T01:27:08,571 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:08,956 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:09,591 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:09,976 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:10,611 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:10,996 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:11,631 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$AA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:12,016 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:12,651 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$BA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:13,035 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:13,671 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$CA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:14,056 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:14,692 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$DA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:15,076 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:15,711 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$EA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:16,096 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:16,731 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$FA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:17,116 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:17,751 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$GA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:18,137 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:18,773 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$HA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:19,156 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:19,791 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$IA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:20,176 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:20,812 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$JA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:21,196 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:21,831 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$KA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:22,216 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:22,852 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$LA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:23,236 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:23,871 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$MA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:24,255 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:24,892 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$NA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:25,276 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:25,912 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$OA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:26,296 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:26,932 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$PA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:27,316 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:27,952 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$QA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:28,336 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:28,972 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$RA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:29,272 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:27:29,357 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:29,993 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$SA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:30,376 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:31,013 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$TA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:31,396 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:32,032 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$UA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:32,416 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:33,052 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$VA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:33,113 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-37-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]} timed out after 120.026832563 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-37-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]} timed out after 120.026832563 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-37-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.9}]} timed out after 120.026832563 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:27:33,436 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:34,072 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$WA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:34,456 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:35,092 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$XA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:35,476 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:36,112 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$YA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:36,496 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:37,132 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ZA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:37,517 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:38,153 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:38,537 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:39,173 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:39,556 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:40,192 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:40,576 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:41,213 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:41,596 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:42,233 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:42,616 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:43,253 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:43,636 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:44,273 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:44,656 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:45,293 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:45,676 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:46,313 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:46,696 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:47,333 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:47,716 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:48,353 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:48,736 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:49,373 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:49,756 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:27:50,313 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:27:50,394 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aB], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:50,776 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:51,414 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:51,796 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:52,433 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:52,816 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:53,453 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:53,836 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:54,474 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:54,856 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:55,494 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:55,876 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:56,514 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:56,897 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:57,535 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:57,916 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:27:58,554 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:58,937 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:59,574 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:27:59,956 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:00,594 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:00,976 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:01,613 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:01,996 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:02,635 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:03,016 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:03,655 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:04,036 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:04,674 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:05,056 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:05,695 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:06,077 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:06,714 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:07,097 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:07,735 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:08,116 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:08,754 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:09,136 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:09,774 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:10,156 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:10,795 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:11,177 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:11,352 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:28:11,814 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:12,196 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:12,835 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:13,218 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:13,855 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:14,237 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:14,875 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:15,256 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:15,895 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:16,276 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:16,915 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$AB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:17,296 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:17,935 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$BB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:18,316 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:18,955 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$CB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:19,337 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:19,975 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$DB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:20,356 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:20,995 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$EB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:21,376 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:22,015 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$FB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:22,396 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:23,036 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$GB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:23,419 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:24,055 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$HB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:24,437 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:25,086 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$IB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:25,457 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:26,106 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$JB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:26,476 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:27,125 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$KB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:27,496 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:28,146 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$LB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:28,516 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:29,165 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$MB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:29,536 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:30,186 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$NB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:30,556 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:31,205 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$OB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:31,576 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:32,226 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$PB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:32,392 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:28:32,596 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:33,246 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$QB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:33,616 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:33,712 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Follower Restart 2025-09-23T01:28:34,266 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$RB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:34,636 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:35,286 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$SB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:35,656 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:36,306 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$TB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:36,676 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:37,326 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$UB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:37,696 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:38,346 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$VB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:38,716 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:39,366 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$WB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:39,736 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:40,385 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$XB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:40,756 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:41,406 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$YB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:41,776 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:42,427 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ZB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:42,796 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:43,447 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:43,816 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:44,466 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:44,836 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:45,487 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:45,856 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:46,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:46,877 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:47,527 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:47,896 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:48,547 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:48,917 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:49,567 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:49,936 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:50,587 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:50,956 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:51,607 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:51,976 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:52,627 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:52,996 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:53,433 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:28:53,647 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:54,016 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:54,667 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:55,036 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:55,687 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:28:56,056 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:56,707 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:57,076 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:57,727 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:58,096 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:28:58,748 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:59,115 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:28:59,768 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:00,136 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:00,787 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:01,156 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:01,808 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:02,176 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:02,827 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:03,196 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:03,847 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:04,217 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:04,868 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:05,236 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:05,888 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:06,256 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:29:06,513 | ERROR | ForkJoinPool-10-worker-1 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error
java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.15}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.15}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-38-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.15}]} timed out after 120.024912227 seconds. The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.15}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-38-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.15}]} timed out after 120.024912227 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-38-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=0}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.0.15}]} timed out after 120.024912227 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T01:29:06,908 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:29:07,276 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:07,928 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:08,296 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:08,948 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:09,316 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:09,968 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:10,336 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:10,988 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:11,356 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:12,008 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:12,376 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:13,029 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:13,396 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:14,049 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:14,416 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:14,462 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-46 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:29:15,069 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:15,436 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:16,089 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:16,456 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:17,108 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:17,476 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:18,128 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:18,496 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:19,148 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:19,517 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:20,169 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:20,536 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:21,188 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:21,556 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:22,209 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$AC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:22,576 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:23,228 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$BC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:23,596 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:24,249 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$CC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:24,616 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:25,269 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$DC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:25,636 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:26,289 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$EC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:26,656 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:27,309 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$FC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:27,676 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:28,329 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$GC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:28,696 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:29,349 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$HC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:29,716 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:30,369 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$IC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:30,735 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:31,389 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$JC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:31,756 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:32,409 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$KC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:32,777 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:33,144 | ERROR | ForkJoinPool-10-worker-2 | FlowReader | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error java.util.concurrent.ExecutionException: ReadFailedException{message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}], errorList=[RpcError [message=Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-39-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]} timed out after 120.02688259 seconds. 
The backend for inventory is not available.]]}
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:292) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:255) ~[bundleFile:?]
    at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[bundleFile:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[bundleFile:?]
    at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.readFlowsX(FlowReader.java:86) [bundleFile:?]
    at org.opendaylight.openflowplugin.applications.bulk.o.matic.FlowReader.run(FlowReader.java:67) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.mdsal.common.api.ReadFailedException: Error reading data for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-39-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]} timed out after 120.02688259 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.recordFailedResponse(RemoteProxyTransaction.java:206) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.failReadFuture(RemoteProxyTransaction.java:221) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.completeRead(RemoteProxyTransaction.java:244) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$doRead$1(RemoteProxyTransaction.java:138) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ReadTransactionRequest{target=member-1-datastore-config-fe-0-txn-39-1, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1829272738], snapshotOnly=true, path=/(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/(urn:opendaylight:flow:inventory?revision=2013-08-19)table/table[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=1}]/flow/flow[{(urn:opendaylight:flow:inventory?revision=2013-08-19)id=Flow-openflow:1.1.10}]} timed out after 120.02688259 seconds. The backend for inventory is not available.
    at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[?:?]
    at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ...
5 more 2025-09-23T01:29:33,429 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$LC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:33,796 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:34,450 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$MC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:34,817 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:35,470 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$NC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:35,502 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:29:35,836 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:36,490 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$OC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:36,859 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:37,509 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$PC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:37,877 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:38,530 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$QC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:38,897 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:39,549 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$RC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:39,916 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:40,569 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$SC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:40,937 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:41,590 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$TC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:41,956 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:42,610 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$UC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:42,976 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:43,630 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$VC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:43,996 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:44,650 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$WC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:45,015 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:45,670 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$XC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:46,036 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:46,690 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$YC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:47,057 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:47,710 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ZC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:48,076 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:48,731 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:49,097 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:49,750 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:50,116 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:50,770 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:51,136 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:51,791 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:52,156 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:52,810 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:53,176 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:53,831 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:54,196 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:54,851 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:29:55,216 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:55,871 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:56,236 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:56,543 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-33 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:29:56,890 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:57,256 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:57,911 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:58,276 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:58,931 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
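The same TimeoutException/NoShardLeaderException trace keeps recurring because the remote frontends simply keep retrying the connect: the rejection entries arrive roughly every 0.5 to 0.65 seconds, alternating between the member-2 and member-3 frontends. The sketch below is an illustrative retry-on-timeout loop in that spirit, not ODL's actual AbstractShardBackendResolver logic; the class, method and parameter names are invented for the example.

import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative retry-on-timeout loop, not ODL's actual resolver code. Names and
// parameters are invented for the example.
public final class RetryOnNoLeader {

    // Re-run an action that may fail with TimeoutException ("Shard has no current
    // leader"), pausing between attempts, until it succeeds or attempts run out.
    static <T> T callWithRetry(Callable<T> action, int maxAttempts, long delayMillis)
            throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be at least 1");
        }
        TimeoutException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();      // e.g. connect to the shard backend
            } catch (TimeoutException e) {
                last = e;                  // no leader yet: wait and retry
                TimeUnit.MILLISECONDS.sleep(delayMillis);
            }
        }
        throw last;                        // still no leader after all attempts
    }
}

The key point the log illustrates is that these WARN entries are retried automatically; they only become fatal if the shard never regains an active leader within whatever attempt budget the caller uses.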
2025-09-23T01:29:59,296 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:29:59,951 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:00,316 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:00,971 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$aD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:01,335 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:01,991 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$bD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:02,356 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:03,011 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:03,377 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:04,031 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:04,396 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:05,051 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$eD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:05,417 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:06,071 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:06,436 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:07,092 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:07,456 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$Zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:08,111 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:08,477 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$0x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:09,132 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:09,496 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$1x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:10,152 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:10,517 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$2x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:11,171 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$kD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:11,537 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$3x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:12,191 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$lD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:12,556 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$4x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:13,211 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$mD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:13,576 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$5x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:14,231 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$nD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:14,596 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$6x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:15,075 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Follower Restart 2025-09-23T01:30:15,253 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:15,466 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node 2025-09-23T01:30:15,616 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$7x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:15,831 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node
2025-09-23T01:30:16,226 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node After Follower Restart
2025-09-23T01:30:16,273 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$pD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:16,637 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$8x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:17,292 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:17,583 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-09-23T01:30:17,656 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$9x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:18,313 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$rD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:18,676 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$+x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:18,683 | INFO | sshd-SshServer[7a5ec7bb](port=8101)-nio2-thread-1 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.170.213:40456 authenticated
2025-09-23T01:30:19,332 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$sD], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:19,466 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/030__Cluster_HA_Data_Recovery_BulkFlow_Single_Switch.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/030__Cluster_HA_Data_Recovery_BulkFlow_Single_Switch.robot 2025-09-23T01:30:19,697 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$~x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:19,830 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Check Shards Status And Initialize Variables" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Check Shards Status And Initialize Variables 2025-09-23T01:30:20,353 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$tD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:20,717 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:21,373 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$uD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:21,736 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$by], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:22,392 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$vD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:22,757 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:23,412 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$wD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:23,437 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before Cluster Restart 2025-09-23T01:30:23,776 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:24,432 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$xD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:24,797 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:25,453 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$yD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:25,817 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:26,473 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$zD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:26,837 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:27,493 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$AD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:27,856 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:28,515 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$BD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:28,876 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:29,533 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$CD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:29,896 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:30,552 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$DD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:30,916 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:31,573 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ED], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:31,936 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:32,593 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$FD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:32,956 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$my], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:33,613 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$GD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:33,915 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node1 2025-09-23T01:30:33,976 | INFO | opendaylight-cluster-data-shard-dispatcher-30 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:34,284 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower 2025-09-23T01:30:34,633 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$HD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:34,637 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster 2025-09-23T01:30:34,996 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-09-23T01:30:35,031 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Cluster Restart 2025-09-23T01:30:35,401 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill All Cluster Nodes" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill All Cluster Nodes 2025-09-23T01:30:35,654 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$ID], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-09-23T01:30:36,017 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:36,753 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$JD], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-09-23T01:30:37,037 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/temp/_user_shardmanager-config_member-1-shard-inventory-config$qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
Sep 23, 2025 1:31:13 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Trying to lock /tmp/karaf-0.23.0/lock
Sep 23, 2025 1:31:13 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Lock acquired
Sep 23, 2025 1:31:13 AM org.apache.karaf.main.Main$KarafLockCallback lockAcquired INFO: Lock acquired. Setting startlevel to 100
2025-09-23T01:31:14,844 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.logging]) | EventAdminConfigurationNotifier | 4 - org.ops4j.pax.logging.pax-logging-log4j2 - 2.2.8 | Logging configuration changed. (Event Admin service unavailable - no notification sent).
2025-09-23T01:31:14,891 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.cm/1.3.2 has been started 2025-09-23T01:31:14,909 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.core/1.10.3 has been started 2025-09-23T01:31:15,016 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Starting JMX OSGi agent 2025-09-23T01:31:15,032 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112] for service with service.id [15] 2025-09-23T01:31:15,033 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112] for service with service.id [40] 2025-09-23T01:31:15,058 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Starting with globalExtender setting: false 2025-09-23T01:31:15,061 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Version = 2.2.6 2025-09-23T01:31:15,221 | INFO | activator-1-thread-1 | Activator | 113 - org.apache.karaf.management.server - 4.4.7 | Setting java.rmi.server.hostname system property to 127.0.0.1 2025-09-23T01:31:15,370 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,374 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,375 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,375 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,375 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | 
Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,376 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,376 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@300a9bba with name osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=348d871e-29c7-47bf-9e2b-ac609f40a112 2025-09-23T01:31:15,382 | INFO | activator-1-thread-2 | Activator | 99 - org.apache.karaf.deployer.features - 4.4.7 | Deployment finished. Registering FeatureDeploymentListener 2025-09-23T01:31:15,386 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Activating the Apache Karaf ServiceComponentRuntime MBean 2025-09-23T01:31:15,487 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.bundle.core/4.4.7 2025-09-23T01:31:15,496 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.config.command/4.4.7 2025-09-23T01:31:15,566 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.deployer.kar/4.4.7 2025-09-23T01:31:15,567 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.diagnostic.core/4.4.7 2025-09-23T01:31:15,578 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.features.command/4.4.7 2025-09-23T01:31:15,581 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.http.core/4.4.7. 
Missing service: [org.apache.karaf.http.core.ProxyService] 2025-09-23T01:31:15,587 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.instance.core/4.4.7 2025-09-23T01:31:15,596 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-09-23T01:31:15,597 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-09-23T01:31:15,598 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-09-23T01:31:15,600 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.kar.core/4.4.7 2025-09-23T01:31:15,604 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.log.core/4.4.7 2025-09-23T01:31:15,605 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.package.core/4.4.7 2025-09-23T01:31:15,607 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.service.core/4.4.7 2025-09-23T01:31:15,613 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-09-23T01:31:15,614 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-09-23T01:31:15,616 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | Activator | 120 - org.apache.karaf.shell.core - 4.4.7 | Not starting local console. To activate set karaf.startLocalConsole=true 2025-09-23T01:31:15,658 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.karaf.shell.core/4.4.7 has been started 2025-09-23T01:31:15,697 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.shell.ssh/4.4.7. 
Missing service: [org.apache.sshd.server.SshServer] 2025-09-23T01:31:15,735 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.system.core/4.4.7 2025-09-23T01:31:15,758 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.web.core/4.4.7. Missing service: [org.apache.karaf.web.WebContainerService] 2025-09-23T01:31:15,791 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | Activator | 392 - org.ops4j.pax.web.pax-web-extender-war - 8.0.30 | Configuring WAR extender thread pool. Pool size = 3 2025-09-23T01:31:15,900 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.ssh/4.4.7 2025-09-23T01:31:15,908 | INFO | activator-1-thread-1 | DefaultIoServiceFactoryFactory | 125 - org.apache.sshd.osgi - 2.14.0 | No detected/configured IoServiceFactoryFactory; using Nio2ServiceFactoryFactory 2025-09-23T01:31:15,951 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | Activator | 393 - org.ops4j.pax.web.pax-web-extender-whiteboard - 8.0.30 | Starting Pax Web Whiteboard Extender 2025-09-23T01:31:15,989 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | log | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Logging initialized @3367ms to org.eclipse.jetty.util.log.Slf4jLog 2025-09-23T01:31:16,013 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.web]) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because configuration has changed 2025-09-23T01:31:16,014 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | EventAdmin support enabled, WAB events will be posted to EventAdmin topics. 
2025-09-23T01:31:16,014 | INFO | paxweb-config-3-thread-1 (change config) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because ServerControllerFactory has been registered 2025-09-23T01:31:16,014 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Pax Web Runtime started 2025-09-23T01:31:16,039 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Starting BlueprintBundleTracker 2025-09-23T01:31:16,049 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.cm_1.3.2 [78] was successfully created 2025-09-23T01:31:16,059 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.core_1.10.3 [79] was successfully created 2025-09-23T01:31:16,071 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.karaf.shell.core_4.4.7 [120] was successfully created 2025-09-23T01:31:16,093 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Configuring server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-09-23T01:31:16,093 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Configuring JettyServerController{configuration=da4bbc8a-2465-4196-9af9-234310d36f60,state=UNCONFIGURED} 2025-09-23T01:31:16,094 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating Jetty server instance using configuration properties. 
2025-09-23T01:31:16,110 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Processing Jetty configuration from files: [etc/jetty.xml] 2025-09-23T01:31:16,231 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Found configured connector "jetty-default": 0.0.0.0:8181 2025-09-23T01:31:16,232 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Using configured jetty-default@ed0f1ea{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} as non secure connector for address: 0.0.0.0:8181 2025-09-23T01:31:16,233 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Eagerly starting Jetty thread pool QueuedThreadPool[qtp1298175459]@4d6095e3{STOPPED,0<=0<=200,i=0,r=-1,q=0}[NO_TRY] 2025-09-23T01:31:16,246 | INFO | paxweb-config-3-thread-1 (change controller) | JettyFactory | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding JMX support to Jetty server 2025-09-23T01:31:16,269 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Starting server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-09-23T01:31:16,269 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting JettyServerController{configuration=da4bbc8a-2465-4196-9af9-234310d36f60,state=STOPPED} 2025-09-23T01:31:16,270 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Server@4a8de9ec{STOPPED}[9.4.57.v20241219] 2025-09-23T01:31:16,272 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 21.0.5+11-Ubuntu-1ubuntu122.04 2025-09-23T01:31:16,295 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | DefaultSessionIdManager workerName=node0 2025-09-23T01:31:16,295 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | No SessionScavenger set, using defaults 2025-09-23T01:31:16,298 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | node0 Scavenging every 600000ms 2025-09-23T01:31:16,337 | INFO | paxweb-config-3-thread-1 (change controller) | AbstractConnector | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started jetty-default@ed0f1ea{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} 2025-09-23T01:31:16,337 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started @3726ms 2025-09-23T01:31:16,339 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpService factory 2025-09-23T01:31:16,341 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.http.core_4.4.7 [105]] 2025-09-23T01:31:16,350 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - 
org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.web.core_4.4.7 [124]] 2025-09-23T01:31:16,353 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.http.core/4.4.7 2025-09-23T01:31:16,357 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.web.core/4.4.7 2025-09-23T01:31:16,361 | INFO | HttpService->WarExtender (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-war_8.0.30 [392]] 2025-09-23T01:31:16,363 | INFO | HttpService->Whiteboard (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard_8.0.30 [393]] 2025-09-23T01:31:16,362 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpServiceRuntime 2025-09-23T01:31:16,370 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-09-23T01:31:16,371 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)}", size=2} 2025-09-23T01:31:16,371 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-09-23T01:31:16,404 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jdbc.core/4.4.7 2025-09-23T01:31:16,407 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)} to o.o.p.w.s.j.i.PaxWebServletContextHandler@4ce2aae2{/,null,STOPPED} 2025-09-23T01:31:16,410 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@4ce2aae2{/,null,STOPPED} 2025-09-23T01:31:16,514 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.jolokia.osgi_1.7.2 [155]] 2025-09-23T01:31:16,524 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@16423ec1,contexts=[{HS,OCM-5,context:74288752,/}]} 2025-09-23T01:31:16,525 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of 
ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@16423ec1,contexts=null}", size=3} 2025-09-23T01:31:16,525 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{HS,id=OCM-5,name='context:74288752',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:74288752',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@46d8e70}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@4ce2aae2{/,null,STOPPED} 2025-09-23T01:31:16,526 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@4ce2aae2{/,null,STOPPED} 2025-09-23T01:31:16,526 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@16423ec1,contexts=[{HS,OCM-5,context:74288752,/}]} 2025-09-23T01:31:16,530 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/" with default Osgi Context OsgiContextModel{HS,id=OCM-5,name='context:74288752',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:74288752',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@46d8e70}} 2025-09-23T01:31:16,547 | INFO | paxweb-config-3-thread-1 | osgi | 155 - org.jolokia.osgi - 1.7.2 | No access restrictor found, access to any MBean is allowed 2025-09-23T01:31:16,573 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@4ce2aae2{/,null,AVAILABLE} 2025-09-23T01:31:16,574 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{HS,id=OCM-5,name='context:74288752',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:74288752',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@46d8e70}}} as OSGi service for "/" context path 2025-09-23T01:31:16,653 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-09-23T01:31:16,676 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CustomFilterAdapterConfigurationImpl | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Custom filter properties updated: {service.pid=org.opendaylight.aaa.filterchain, osgi.ds.satisfying.condition.target=(osgi.condition.id=true), customFilterList=, 
component.name=org.opendaylight.aaa.filterchain.configuration.impl.CustomFilterAdapterConfigurationImpl, felix.fileinstall.filename=file:/tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg, component.id=4, Filter.target=(org.opendaylight.aaa.filterchain.filter=true)} 2025-09-23T01:31:16,700 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.web.servlet.ServletSupport), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.AuthenticationService), (objectClass=org.opendaylight.aaa.api.IIDMStore)] 2025-09-23T01:31:16,714 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.aaa.shiro_0.21.0 [172]] 2025-09-23T01:31:16,716 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-09-23T01:31:16,716 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]}", size=1} 2025-09-23T01:31:16,716 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-09-23T01:31:16,724 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.web.servlet.ServletSupport), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.IIDMStore)] 2025-09-23T01:31:16,732 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), 
(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.IIDMStore)] 2025-09-23T01:31:16,772 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | FileAkkaConfigurationReader | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | File-based Pekko configuration reader enabled 2025-09-23T01:31:16,792 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider starting 2025-09-23T01:31:17,051 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | ActorSystemProviderImpl | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating new ActorSystem 2025-09-23T01:31:17,351 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Slf4jLogger | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Slf4jLogger started 2025-09-23T01:31:17,585 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ArteryTransport | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Remoting started with transport [Artery tcp]; listening on address [pekko://opendaylight-cluster-data@10.30.170.97:2550] with UID [4931998005175290066] 2025-09-23T01:31:17,595 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Starting up, Pekko version [1.0.3] ... 2025-09-23T01:31:17,642 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Registered cluster JMX MBean [pekko:type=Cluster] 2025-09-23T01:31:17,643 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Started up successfully 2025-09-23T01:31:17,689 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR started. Config: strategy [KeepMajority], stable-after [7 seconds], down-all-when-unstable [5250 milliseconds], selfUniqueAddress [pekko://opendaylight-cluster-data@10.30.170.97:2550#4931998005175290066], selfDc [default]. 
2025-09-23T01:31:17,900 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider started 2025-09-23T01:31:17,907 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | FileModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Shard configuration provider started 2025-09-23T01:31:17,925 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4. Missing service: [org.opendaylight.infrautils.diagstatus.DiagStatusServiceMBean] 2025-09-23T01:31:18,041 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.150:2550], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.150/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T01:31:18,061 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.150:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.150/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T01:31:18,092 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | ThreadFactory created: SystemReadyService 2025-09-23T01:31:18,094 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | Now starting to provide full system readiness status updates (see TestBundleDiag's logs)... 2025-09-23T01:31:18,094 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DiagStatusServiceImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service started 2025-09-23T01:31:18,095 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos() started... 2025-09-23T01:31:18,100 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | MBeanUtils | 198 - org.opendaylight.infrautils.diagstatus-api - 7.1.4 | MBean registration for org.opendaylight.infrautils.diagstatus:type=SvcStatus SUCCESSFUL. 
2025-09-23T01:31:18,100 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DiagStatusServiceMBeanImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service management started 2025-09-23T01:31:18,101 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-09-23T01:31:18,117 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.api.openflow.mastership.MastershipChangeServiceManager), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:18,125 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon#1930213783]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T01:31:18,126 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#1859882270]], but this node is not initialized yet 2025-09-23T01:31:18,139 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-09-23T01:31:18,145 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0. 
Missing service: [org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager] 2025-09-23T01:31:18,152 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.NotificationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:18,190 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationServiceFactory)] 2025-09-23T01:31:18,198 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-09-23T01:31:18,199 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:18,200 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), 
(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:18,204 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | ReconciliationManager started 2025-09-23T01:31:18,205 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-09-23T01:31:18,205 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:18,210 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | MessageIntelligenceAgencyImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Registered MBean org.opendaylight.openflowplugin.impl.statistics.ofpspecific:type=MessageIntelligenceAgencyMXBean 2025-09-23T01:31:18,212 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.impl/0.20.0 2025-09-23T01:31:18,235 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:18,235 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 
2025-09-23T01:31:18,237 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OpenflowServiceRecoveryHandlerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Registering openflowplugin service recovery handlers 2025-09-23T01:31:18,241 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0. Missing service: [org.opendaylight.serviceutils.srm.spi.RegistryControl, org.opendaylight.mdsal.binding.api.DataBroker] 2025-09-23T01:31:18,246 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | SimpleBindingDOMCodecFactory | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Binding/DOM Codec enabled 2025-09-23T01:31:18,251 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activating 2025-09-23T01:31:18,253 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activated 2025-09-23T01:31:18,259 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DefaultBindingRuntimeGenerator | 328 - org.opendaylight.yangtools.binding-generator - 14.0.14 | Binding/YANG type support activated 2025-09-23T01:31:18,272 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activating 2025-09-23T01:31:18,273 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activated 2025-09-23T01:31:18,323 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime starting 2025-09-23T01:31:18,351 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | KarafFeaturesSupport | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Will attempt to integrate with Karaf FeaturesService 2025-09-23T01:31:18,885 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | NettyTransportSupport | 284 - org.opendaylight.netconf.transport-api - 9.0.0 | Netty transport backed by epoll(2) 2025-09-23T01:31:19,116 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | SharedEffectiveModelContextFactory | 379 - org.opendaylight.yangtools.yang-parser-impl - 14.0.14 | Using weak references 2025-09-23T01:31:19,139 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#316344871]], but this node is not initialized yet 2025-09-23T01:31:19,193 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node 
[pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoinNack message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon#2052116099]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T01:31:19,229 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] is JOINING itself (with roles [member-1, dc-default], version [0.0.0]) and forming new cluster 2025-09-23T01:31:19,232 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - is the new leader among reachable nodes (more leaders may exist) 2025-09-23T01:31:19,253 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Up] 2025-09-23T01:31:19,269 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist). 2025-09-23T01:31:21,187 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiModuleInfoSnapshotImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | EffectiveModelContext generation 1 activated 2025-09-23T01:31:21,189 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | DOM Schema services activated 2025-09-23T01:31:21,189 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | Updating context to generation 1 2025-09-23T01:31:21,194 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DOMRpcRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM RPC/Action router started 2025-09-23T01:31:21,202 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service starting 2025-09-23T01:31:21,205 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service started 2025-09-23T01:31:21,380 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-33 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage MAPPED 2025-09-23T01:31:22,066 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiBindingRuntimeContextImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | BindingRuntimeContext generation 1 activated 2025-09-23T01:31:22,090 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | 
OSGiBindingDOMCodecServicesImpl | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec generation 1 activated 2025-09-23T01:31:22,091 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | GlobalBindingDOMCodecServices | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Global Binding/DOM Codec activated with generation 1 2025-09-23T01:31:22,097 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiDatastoreContextIntrospectorFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore Context Introspector activated 2025-09-23T01:31:22,100 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION starting 2025-09-23T01:31:22,393 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : config 2025-09-23T01:31:22,394 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T01:31:22,395 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T01:31:22,402 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-config 2025-09-23T01:31:22,421 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-config 2025-09-23T01:31:22,432 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Recovery complete 2025-09-23T01:31:22,458 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.97:2550 2025-09-23T01:31:22,458 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-default-config 2025-09-23T01:31:22,459 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-topology-config 2025-09-23T01:31:22,459 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer 
member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-09-23T01:31:22,459 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-09-23T01:31:22,499 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store config is using tell-based protocol 2025-09-23T01:31:22,504 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T01:31:22,505 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-default-config 2025-09-23T01:31:22,507 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-topology-config 2025-09-23T01:31:22,507 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-09-23T01:31:22,508 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-09-23T01:31:22,515 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-09-23T01:31:22,516 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL starting 2025-09-23T01:31:22,517 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : operational 2025-09-23T01:31:22,517 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-operational 2025-09-23T01:31:22,520 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-config: Shard created, persistent : true 2025-09-23T01:31:22,525 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-topology-config: Shard created, persistent : true 2025-09-23T01:31:22,526 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-config: Shard created, persistent : true 2025-09-23T01:31:22,527 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-operational 2025-09-23T01:31:22,529 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-toaster-config: Shard created, persistent : true 2025-09-23T01:31:22,540 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Recovery complete 2025-09-23T01:31:22,542 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.97:2550 2025-09-23T01:31:22,543 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-09-23T01:31:22,543 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-09-23T01:31:22,543 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-09-23T01:31:22,543 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-09-23T01:31:22,545 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store operational is using tell-based protocol 2025-09-23T01:31:22,552 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address 
pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-09-23T01:31:22,552 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-topology-config/member-1-shard-topology-config-notifier#-764031478 created and ready for shard:member-1-shard-topology-config 2025-09-23T01:31:22,553 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-toaster-config/member-1-shard-toaster-config-notifier#1157134269 created and ready for shard:member-1-shard-toaster-config 2025-09-23T01:31:22,553 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-default-operational: Shard created, persistent : false 2025-09-23T01:31:22,559 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-topology-operational: Shard created, persistent : false 2025-09-23T01:31:22,560 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiBlockingBindingNormalizer | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter activated 2025-09-23T01:31:22,560 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-inventory-config/member-1-shard-inventory-config-notifier#1553263515 created and ready for shard:member-1-shard-inventory-config 2025-09-23T01:31:22,562 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-config/member-1-shard-default-config/member-1-shard-default-config-notifier#-1455503314 created and ready for shard:member-1-shard-default-config 2025-09-23T01:31:22,562 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-default-operational/member-1-shard-default-operational-notifier#-221841659 created and ready for shard:member-1-shard-default-operational 2025-09-23T01:31:22,562 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-topology-operational/member-1-shard-topology-operational-notifier#1368085235 created and ready for shard:member-1-shard-topology-operational 2025-09-23T01:31:22,553 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress 
for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-09-23T01:31:22,566 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Starting recovery with journal batch size 1 2025-09-23T01:31:22,567 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Starting recovery with journal batch size 1 2025-09-23T01:31:22,567 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for MountPointService activated 2025-09-23T01:31:22,569 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Starting recovery with journal batch size 1 2025-09-23T01:31:22,572 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-09-23T01:31:22,572 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-09-23T01:31:22,573 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-inventory-operational: Shard created, persistent : false 2025-09-23T01:31:22,574 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-inventory-operational/member-1-shard-inventory-operational-notifier#435244188 created and ready for shard:member-1-shard-inventory-operational 2025-09-23T01:31:22,574 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-1-shard-toaster-operational: Shard created, persistent : false 2025-09-23T01:31:22,575 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.97:2550/user/shardmanager-operational/member-1-shard-toaster-operational/member-1-shard-toaster-operational-notifier#676579349 created and ready for shard:member-1-shard-toaster-operational 2025-09-23T01:31:22,575 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Starting recovery with journal batch size 1 2025-09-23T01:31:22,569 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 
11.0.0 | member-1-shard-topology-operational: Starting recovery with journal batch size 1 2025-09-23T01:31:22,580 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Starting recovery with journal batch size 1 2025-09-23T01:31:22,581 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Starting recovery with journal batch size 1 2025-09-23T01:31:22,582 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-46 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage DISK 2025-09-23T01:31:22,581 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Starting recovery with journal batch size 1 2025-09-23T01:31:22,585 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DOMNotificationRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM Notification Router started 2025-09-23T01:31:22,593 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:22,594 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationService activated 2025-09-23T01:31:22,597 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationPublishService activated 2025-09-23T01:31:22,597 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:22,600 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), 
(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:22,600 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-09-23T01:31:22,601 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcService activated 2025-09-23T01:31:22,602 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T01:31:22,610 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcProviderService activated 2025-09-23T01:31:22,645 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: journal open: applyTo=0 2025-09-23T01:31:22,645 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: journal open: applyTo=0 2025-09-23T01:31:22,647 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: journal open: applyTo=0 2025-09-23T01:31:22,652 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: journal open: applyTo=0 2025-09-23T01:31:22,652 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: journal open: applyTo=0 2025-09-23T01:31:22,653 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: journal open: applyTo=0 2025-09-23T01:31:22,653 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: journal open: applyTo=1 2025-09-23T01:31:22,657 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: journal open: applyTo=79 2025-09-23T01:31:22,695 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PekkoRecovery | 190 
- org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,696 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,696 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,699 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,702 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:22,703 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-09-23T01:31:22,707 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-config , received role change from null to Follower 2025-09-23T01:31:22,711 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , received role change from null to Follower 2025-09-23T01:31:22,711 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton manager starting singleton actor [pekko://opendaylight-cluster-data/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-09-23T01:31:22,712 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | ClusterSingletonManager state change [Start -> Oldest] 2025-09-23T01:31:22,713 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | 
AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionService activated 2025-09-23T01:31:22,716 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T01:31:22,718 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , received role change from null to Follower 2025-09-23T01:31:22,719 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T01:31:22,721 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,721 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , received role change from null to Follower 2025-09-23T01:31:22,722 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T01:31:22,723 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,723 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,723 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-09-23T01:31:22,723 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , received role change from null to Follower 2025-09-23T01:31:22,724 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for 
member-1-shard-topology-operational from null to Follower 2025-09-23T01:31:22,724 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-operational , received role change from null to Follower 2025-09-23T01:31:22,724 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T01:31:22,725 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T01:31:22,725 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-operational , received role change from null to Follower 2025-09-23T01:31:22,725 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-inventory-operational from null to Follower 2025-09-23T01:31:22,725 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-toaster-operational from null to Follower 2025-09-23T01:31:22,726 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-09-23T01:31:22,726 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-default-operational from null to Follower 2025-09-23T01:31:22,724 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-toaster-config from null to Follower 2025-09-23T01:31:22,727 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-topology-config from null to Follower 2025-09-23T01:31:22,727 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T01:31:22,727 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-inventory-config 
from null to Follower 2025-09-23T01:31:22,727 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionProviderService activated 2025-09-23T01:31:22,728 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | DynamicBindingAdapter | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | 8 DOMService trackers started 2025-09-23T01:31:22,729 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-09-23T01:31:22,731 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-09-23T01:31:22,735 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | GlobalBindingRuntimeContext | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Global BindingRuntimeContext generation 1 activated 2025-09-23T01:31:22,736 | INFO | Start Level: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime started 2025-09-23T01:31:22,791 | INFO | Framework Event Dispatcher: Equinox Container: 348d871e-29c7-47bf-9e2b-ac609f40a112 | Main | 3 - org.ops4j.pax.logging.pax-logging-api - 2.2.8 | Karaf started in 8s. 
Bundle stats: 399 active, 400 total 2025-09-23T01:31:22,825 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-config , received role change from null to Follower 2025-09-23T01:31:22,826 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-default-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-09-23T01:31:22,826 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-default-config from null to Follower 2025-09-23T01:31:23,693 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClusterSingletonProxy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton identified at [pekko://opendaylight-cluster-data/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-09-23T01:31:31,214 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#316344871]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T01:31:31,215 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#316344871]] (version [1.0.3]) 2025-09-23T01:31:31,289 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.171.150:2550] is JOINING, roles [member-3, dc-default], version [0.0.0] 2025-09-23T01:31:31,296 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#-1748809657] was unhandled. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:31:31,297 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#191170312] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
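The MemberJoined dead-letter records above are benign during cluster formation, and the log itself names the two settings that control them: 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. A minimal Java sketch of how those settings could be applied when building a Pekko ActorSystem through Typesafe Config (illustrative only; an OpenDaylight distribution normally picks these up from its cluster configuration file rather than from code, and the system name below is hypothetical):

    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;
    import org.apache.pekko.actor.ActorSystem;

    public final class QuietDeadLetters {
        public static void main(String[] args) {
            // Override the two settings named in the log records above; everything else
            // falls back to the default configuration found on the classpath.
            Config overrides = ConfigFactory.parseString(
                "pekko.log-dead-letters = off\n"
                + "pekko.log-dead-letters-during-shutdown = off\n");
            Config config = overrides.withFallback(ConfigFactory.load());

            ActorSystem system = ActorSystem.create("example-system", config);
            system.terminate();
        }
    }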
2025-09-23T01:31:31,390 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#1859882270]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T01:31:31,390 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#1859882270]] (version [1.0.3]) 2025-09-23T01:31:31,973 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.171.150:2550] to [Up] 2025-09-23T01:31:31,974 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.171.150:2550 2025-09-23T01:31:31,975 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config 2025-09-23T01:31:31,976 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-topology-config 2025-09-23T01:31:31,976 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-09-23T01:31:31,976 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-09-23T01:31:31,976 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-3-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config 2025-09-23T01:31:31,977 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-3-shard-inventory-config set to 
pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-09-23T01:31:31,977 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-3-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-topology-config 2025-09-23T01:31:31,977 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-3-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-09-23T01:31:31,977 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.171.150:2550 2025-09-23T01:31:31,978 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-09-23T01:31:31,978 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-09-23T01:31:31,978 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-3-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-09-23T01:31:31,979 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-09-23T01:31:31,979 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-09-23T01:31:31,979 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-3-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-09-23T01:31:31,979 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer 
member-3-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-09-23T01:31:31,980 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-3-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-09-23T01:31:32,028 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.171.168:2550] is JOINING, roles [member-2, dc-default], version [0.0.0] 2025-09-23T01:31:32,030 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#-1748809657] was unhandled. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:31:32,031 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#191170312] was unhandled. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
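The RequestVote records that follow show member-2 shards campaigning with term 3 while the local followers are still at term 2, and each follower reacting with the standard Raft rule: any message carrying a higher term makes the receiver adopt that term and remain (or become) a follower. A minimal sketch of that rule in Java, using hypothetical names rather than the sal-akka-raft implementation:

    /** Minimal Raft term bookkeeping: adopt any higher term seen in an incoming message. */
    final class RaftTermState {
        private long currentTerm;
        private boolean follower = true;

        /** Returns true if the message term was higher and the node updated its term. */
        boolean onMessageTerm(long messageTerm) {
            if (messageTerm > currentTerm) {
                currentTerm = messageTerm; // e.g. 2 -> 3, as in the records below
                follower = true;           // a candidate or leader would step down here
                return true;
            }
            return false;
        }

        long currentTerm() {
            return currentTerm;
        }
    }

    public final class RaftTermExample {
        public static void main(String[] args) {
            RaftTermState state = new RaftTermState();
            state.onMessageTerm(2);
            boolean updated = state.onMessageTerm(3); // mirrors "Term 3 ... greater than follower's term 2"
            System.out.println("updated=" + updated + ", currentTerm=" + state.currentTerm());
        }
    }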
2025-09-23T01:31:32,741 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-inventory-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,741 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-toaster-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,741 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-inventory-config, lastLogIndex=2, lastLogTerm=2}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,749 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-topology-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,749 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-toaster-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,750 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-topology-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,752 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-default-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,790 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-default-operational status sync done false 2025-09-23T01:31:32,790 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-inventory-config status sync done false 2025-09-23T01:31:32,791 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-toaster-operational status sync done false 2025-09-23T01:31:32,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-topology-config status sync done false 2025-09-23T01:31:32,791 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@1a49fe90 2025-09-23T01:31:32,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-inventory-config status sync done true 2025-09-23T01:31:32,792 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@62969de9 2025-09-23T01:31:32,792 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@531deb7c 2025-09-23T01:31:32,793 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7e1e347d 2025-09-23T01:31:32,793 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@49e4ca30 2025-09-23T01:31:32,793 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-topology-operational status sync done false 2025-09-23T01:31:32,793 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@5da67e6a 2025-09-23T01:31:32,793 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-inventory-operational status sync done false 2025-09-23T01:31:32,794 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-toaster-config status sync done false 2025-09-23T01:31:32,794 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: 
org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@5a400cac 2025-09-23T01:31:32,814 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-2-shard-default-config, lastLogIndex=78, lastLogTerm=2}" message is greater than follower's term 2 - updating term 2025-09-23T01:31:32,829 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@6ee8d3f4 2025-09-23T01:31:32,830 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-default-config status sync done false 2025-09-23T01:31:32,834 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-default-config status sync done true 2025-09-23T01:31:32,993 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.171.168:2550] to [Up] 2025-09-23T01:31:32,994 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:31:32,994 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:31:32,994 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:31:32,994 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:31:32,995 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:31:32,995 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with 
address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:31:32,995 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:31:32,995 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-09-23T01:31:32,995 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:31:32,996 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to 
pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:31:32,997 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:31:32,997 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:31:32,997 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:31:32,997 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-09-23T01:31:33,010 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type CONFIGURATION activated 2025-09-23T01:31:33,011 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION started 2025-09-23T01:31:33,011 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type OPERATIONAL activated 2025-09-23T01:31:33,026 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiClusterAdmin | 193 - org.opendaylight.controller.sal-cluster-admin-impl - 11.0.0 | Cluster Admin services started 2025-09-23T01:31:33,049 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ConcurrentDOMDataBroker | 358 - org.opendaylight.yangtools.util - 14.0.14 | ThreadFactory created: CommitFutures 2025-09-23T01:31:33,055 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | DataBrokerCommitExecutor | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker commit exector started 2025-09-23T01:31:33,059 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ConcurrentDOMDataBroker | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker started 2025-09-23T01:31:33,065 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-09-23T01:31:33,066 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractAdaptedService | 204 - 
org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for DataBroker activated 2025-09-23T01:31:33,117 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:31:33,119 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-default-operational status sync done true 2025-09-23T01:31:33,149 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [Initial app config AaaCertServiceConfig] 2025-09-23T01:31:33,150 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiPasswordServiceConfigBootstrap | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | Listening for password service configuration 2025-09-23T01:31:33,151 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.api.IIDMStore)] 2025-09-23T01:31:33,253 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), Initial app config ShiroConfiguration, (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-09-23T01:31:33,255 | ERROR | opendaylight-cluster-data-notification-dispatcher-51 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | bundle org.opendaylight.aaa.idm-store-h2:0.21.0 (167)[org.opendaylight.aaa.datastore.h2.H2Store(5)] : Constructor argument 0 in class class org.opendaylight.aaa.datastore.h2.H2Store has unsupported type org.opendaylight.aaa.datastore.h2.ConnectionProvider 2025-09-23T01:31:33,263 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | EOSClusterSingletonServiceProvider | 257 - org.opendaylight.mdsal.mdsal-singleton-impl - 14.0.13 | Cluster Singleton Service started 2025-09-23T01:31:33,267 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default iteration count=20000 2025-09-23T01:31:33,267 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default algorithm=SHA-512 2025-09-23T01:31:33,267 | INFO | 
opendaylight-cluster-data-notification-dispatcher-51 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will not utilize a private salt, since none was configured 2025-09-23T01:31:33,271 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | AAAEncryptionServiceImpl | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | AAAEncryptionService activated 2025-09-23T01:31:33,275 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Encryption Service enabled 2025-09-23T01:31:33,276 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | H2 IDMStore activated 2025-09-23T01:31:33,278 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer registered 2025-09-23T01:31:33,282 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration, (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-09-23T01:31:33,287 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration] 2025-09-23T01:31:33,291 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-09-23T01:31:33,292 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:31:33,303 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 12.04 ms 2025-09-23T01:31:33,337 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:31:33,347 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:31:33,352 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ArbitratorReconciliationManagerImpl | 296 - org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl - 0.20.0 | ArbitratorReconciliationManager has started successfully. 2025-09-23T01:31:33,372 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), Initial app config ForwardingRulesManagerConfig] 2025-09-23T01:31:33,377 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-09-23T01:31:33,377 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:31:33,382 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 4.764 ms 2025-09-23T01:31:33,392 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=0, txSequence=1, message=ModifyTransactionSuccess{target=member-1-datastore-config-fe-1-txn-0-0, sequence=1}} found, ignoring response 2025-09-23T01:31:33,392 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=0, txSequence=3, message=ModifyTransactionSuccess{target=member-1-datastore-config-fe-1-txn-1-0, sequence=1}} found, ignoring response 2025-09-23T01:31:33,393 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=0, txSequence=5, message=ModifyTransactionSuccess{target=member-1-datastore-config-fe-1-txn-2-0, sequence=1}} found, ignoring response 2025-09-23T01:31:33,417 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [Initial app config LldpSpeakerConfig] 2025-09-23T01:31:33,437 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | DeviceOwnershipService started 2025-09-23T01:31:33,458 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-09-23T01:31:33,463 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-09-23T01:31:33,465 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-09-23T01:31:33,538 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | DefaultConfigPusher | 301 - org.opendaylight.openflowplugin.applications.of-switch-config-pusher - 0.20.0 | DefaultConfigPusher has started. 
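The ConfigurationServiceFactoryImpl records that follow first load the openflowplugin defaults from the YANG-modelled openflow-provider-config and then apply etc/org.opendaylight.openflowplugin.cfg (see the felix.fileinstall.filename entry further down). That .cfg file is ordinary java.util.Properties key=value text, so it can be inspected with a few lines of Java; the path is taken from the records below and the keys mirror the property names logged there (a sketch, not an OpenDaylight API):

    import java.io.Reader;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public final class ReadOfpConfig {
        public static void main(String[] args) throws Exception {
            // Path as logged under felix.fileinstall.filename; adjust for your installation.
            Path cfg = Path.of("/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg");
            Properties props = new Properties();
            try (Reader reader = Files.newBufferedReader(cfg)) {
                props.load(reader);
            }
            // Property names match those reported by ConfigurationServiceFactoryImpl.
            System.out.println("rpc-requests-quota = " + props.getProperty("rpc-requests-quota"));
            System.out.println("thread-pool-max-threads = " + props.getProperty("thread-pool-max-threads"));
        }
    }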
2025-09-23T01:31:33,552 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [Initial app config TopologyLldpDiscoveryConfig, (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-09-23T01:31:33,560 | INFO | Blueprint Extender: 1 | LLDPSpeaker | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | LLDPSpeaker started, it will send LLDP frames each 5 seconds 2025-09-23T01:31:33,573 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-09-23T01:31:33,597 | INFO | Blueprint Extender: 3 | AaaCertMdsalProvider | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCertMdsalProvider Initialized 2025-09-23T01:31:33,613 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | FlowCapableTopologyProvider | 304 - org.opendaylight.openflowplugin.applications.topology-manager - 0.20.0 | Topology Manager service started. 2025-09-23T01:31:33,662 | INFO | Blueprint Extender: 1 | NodeConnectorInventoryEventTranslator | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | NodeConnectorInventoryEventTranslator has started. 2025-09-23T01:31:33,667 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 has been started 2025-09-23T01:31:33,671 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.lldp-speaker_0.20.0 [300] was successfully created 2025-09-23T01:31:33,701 | INFO | Blueprint Extender: 3 | LazyBindingList | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for lists larger than 16 element(s) 2025-09-23T01:31:33,732 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading properties from '(urn:opendaylight:params:xml:ns:yang:openflow:provider:config?revision=2016-05-10)openflow-provider-config' YANG file 2025-09-23T01:31:33,734 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | rpc-requests-quota configuration property was changed to '20000' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | global-notification-quota configuration property was changed to '64000' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | switch-features-mandatory configuration property was changed to 'false' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | enable-flow-removed-notification configuration property was changed to 'true' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - 
org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-rpc-enabled configuration property was changed to 'false' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-count-limit configuration property was changed to '25600' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-interval-timeout-limit configuration property was changed to '500' 2025-09-23T01:31:33,735 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | echo-reply-timeout configuration property was changed to '2000' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-table-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-flow-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-group-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-meter-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-port-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-queue-statistics-polling-on configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | skip-table-features configuration property was changed to 'true' 2025-09-23T01:31:33,736 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | basic-timer-delay configuration property was changed to '3000' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | maximum-timer-delay configuration property was changed to '900000' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | use-single-layer-serialization configuration property was changed to 'true' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-min-threads configuration property was changed to '1' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-max-threads configuration property 
was changed to '32000' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-timeout configuration property was changed to '60' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-rate-limit-per-min configuration property was changed to '0' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-hold-time-in-seconds configuration property was changed to '0' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-datastore-removal-delay configuration property was changed to '500' 2025-09-23T01:31:33,737 | INFO | Blueprint Extender: 2 | OSGiConfigurationServiceFactory | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading configuration from 'org.opendaylight.openflowplugin' configuration file 2025-09-23T01:31:33,738 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:31:33,739 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-inventory-operational status sync done true 2025-09-23T01:31:33,742 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | MD-SAL configuration-based SwitchConnectionProviders started 2025-09-23T01:31:33,748 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:31:33,749 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-topology-operational status sync done true 2025-09-23T01:31:33,749 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-default-impl' 2025-09-23T01:31:33,762 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | felix.fileinstall.filename configuration property was changed to 'file:/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg' 2025-09-23T01:31:33,762 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | service.pid configuration property was changed to 'org.opendaylight.openflowplugin' 2025-09-23T01:31:33,767 | INFO | Blueprint 
Extender: 3 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | Certificate Manager service has been initialized 2025-09-23T01:31:33,778 | INFO | Blueprint Extender: 3 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCert Rpc Service has been initialized 2025-09-23T01:31:33,780 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 has been started 2025-09-23T01:31:33,783 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.cert_0.21.0 [163] was successfully created 2025-09-23T01:31:33,798 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:31:33,799 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-topology-config status sync done true 2025-09-23T01:31:33,805 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-09-23T01:31:33,806 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:31:33,806 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:31:33,807 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-toaster-config status sync done true 2025-09-23T01:31:33,807 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-toaster-operational status sync done true 2025-09-23T01:31:33,840 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.topology.lldp.LLDPLinkAger@4c9cdd89 was registered as configuration listener to OpenFlowPlugin configuration service 2025-09-23T01:31:33,855 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for 
(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] 2025-09-23T01:31:33,856 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.frm.impl.ForwardingRulesManagerImpl@4ec290bb was registered as configuration listener to OpenFlowPlugin configuration service 2025-09-23T01:31:33,860 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] 2025-09-23T01:31:33,868 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] already present 2025-09-23T01:31:33,869 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] already present 2025-09-23T01:31:33,869 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL started 2025-09-23T01:31:33,907 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | YangLibraryWriter | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer started with modules-state enabled 2025-09-23T01:31:33,911 | INFO | Blueprint Extender: 3 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | Starting LLDPActivator with lldpSecureKey: aa9251f8-c7c0-4322-b8d6-c3a84593bda3 2025-09-23T01:31:33,912 | INFO | Blueprint Extender: 3 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | LLDPDiscoveryListener started. 
2025-09-23T01:31:33,913 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 has been started 2025-09-23T01:31:33,915 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery_0.20.0 [303] was successfully created 2025-09-23T01:31:33,940 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-legacy-impl' 2025-09-23T01:31:33,975 | INFO | Blueprint Extender: 3 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Checking if default entries must be created in IDM store 2025-09-23T01:31:34,068 | INFO | Blueprint Extender: 1 | ForwardingRulesManagerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | ForwardingRulesManager has started successfully. 2025-09-23T01:31:34,077 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 has been started 2025-09-23T01:31:34,081 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager_0.20.0 [299] was successfully created 2025-09-23T01:31:34,114 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T01:31:34,115 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T01:31:34,116 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 1.001 ms 2025-09-23T01:31:34,150 | INFO | Blueprint Extender: 2 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | OpenFlowPluginProvider started, waiting for onSystemBootReady() 2025-09-23T01:31:34,151 | INFO | Blueprint Extender: 2 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@7ffa49c6 2025-09-23T01:31:34,151 | INFO | Blueprint Extender: 2 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@7c23b239 2025-09-23T01:31:34,157 | INFO | Blueprint Extender: 2 | OnfExtensionProvider | 308 - org.opendaylight.openflowplugin.extension-onf - 0.20.0 | ONF Extension Provider started. 2025-09-23T01:31:34,158 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 has been started 2025-09-23T01:31:34,159 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.impl_0.20.0 [309] was successfully created 2025-09-23T01:31:34,234 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-09-23T01:31:34,235 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:31:34,323 | INFO | Blueprint Extender: 3 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Found default domain in IDM store, skipping insertion of default data 2025-09-23T01:31:34,325 | INFO | Blueprint Extender: 3 | AAAShiroProvider | 172 - org.opendaylight.aaa.shiro - 0.21.0 | AAAShiroProvider Session Initiated 2025-09-23T01:31:34,445 | INFO | Blueprint Extender: 3 | IniSecurityManagerFactory | 171 - org.opendaylight.aaa.repackaged-shiro - 0.21.0 | Realms have been explicitly set on the SecurityManager instance - auto-setting of realms will not occur. 
2025-09-23T01:31:34,520 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-11,contextPath='/auth'} 2025-09-23T01:31:34,521 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=300, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}", size=2} 2025-09-23T01:31:34,521 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-11,contextPath='/auth'} 2025-09-23T01:31:34,523 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=300, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@4f788351{/auth,null,STOPPED} 2025-09-23T01:31:34,524 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@4f788351{/auth,null,STOPPED} 2025-09-23T01:31:34,526 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-09-23T01:31:34,528 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=2} 2025-09-23T01:31:34,528 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T01:31:34,530 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/auth" with default Osgi Context OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=300, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} 2025-09-23T01:31:34,530 | INFO | Blueprint Extender: 3 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.aaa.shiro_0.21.0 [172] registered context path /auth with 4 service(s) 2025-09-23T01:31:34,533 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-09-23T01:31:34,537 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain 
with 0 Filters: 2025-09-23T01:31:34,537 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@4f788351{/auth,null,AVAILABLE} 2025-09-23T01:31:34,543 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=300, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}} as OSGi service for "/auth" context path 2025-09-23T01:31:34,544 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T01:31:34,545 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-13,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-09-23T01:31:34,545 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-13,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=2} 2025-09-23T01:31:34,545 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T01:31:34,546 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T01:31:34,546 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-14,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-09-23T01:31:34,546 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-14,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=1} 2025-09-23T01:31:34,550 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-14,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-09-23T01:31:34,559 | ERROR | Blueprint Extender: 3 | MdsalRestconfServer | 279 - org.opendaylight.netconf.restconf-server-mdsal - 9.0.0 | bundle org.opendaylight.netconf.restconf-server-mdsal:9.0.0 (279)[org.opendaylight.restconf.server.mdsal.MdsalRestconfServer(69)] : Constructor argument 5 in class class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer has unsupported type [Lorg.opendaylight.restconf.server.spi.RpcImplementation; 2025-09-23T01:31:34,645 | INFO | Blueprint Extender: 3 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278]] 2025-09-23T01:31:34,646 | INFO | paxweb-config-3-thread-1 | 
ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-18,contextPath='/rests'} 2025-09-23T01:31:34,646 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=312, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}", size=2} 2025-09-23T01:31:34,646 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-18,contextPath='/rests'} 2025-09-23T01:31:34,647 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=312, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@4acdf35d{/rests,null,STOPPED} 2025-09-23T01:31:34,648 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@4acdf35d{/rests,null,STOPPED} 2025-09-23T01:31:34,650 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-09-23T01:31:34,650 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=2} 2025-09-23T01:31:34,650 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T01:31:34,650 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T01:31:34,650 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/rests" with default Osgi Context OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=312, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} 2025-09-23T01:31:34,651 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-09-23T01:31:34,651 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-09-23T01:31:34,651 | INFO | paxweb-config-3-thread-1 | 
ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@4acdf35d{/rests,null,AVAILABLE} 2025-09-23T01:31:34,652 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=312, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}} as OSGi service for "/rests" context path 2025-09-23T01:31:34,652 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T01:31:34,653 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-21,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-09-23T01:31:34,653 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-21,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=2} 2025-09-23T01:31:34,653 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T01:31:34,653 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T01:31:34,653 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T01:31:34,655 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-22,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-09-23T01:31:34,655 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-22,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=1} 2025-09-23T01:31:34,655 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-22,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-09-23T01:31:34,655 | INFO | Blueprint Extender: 3 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /rests with 4 service(s) 2025-09-23T01:31:34,656 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-25,contextPath='/.well-known'} 2025-09-23T01:31:34,656 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of 
OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=317, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}", size=2} 2025-09-23T01:31:34,656 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-25,contextPath='/.well-known'} 2025-09-23T01:31:34,657 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=317, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@713c8f4{/.well-known,null,STOPPED} 2025-09-23T01:31:34,657 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@713c8f4{/.well-known,null,STOPPED} 2025-09-23T01:31:34,658 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]} 2025-09-23T01:31:34,658 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]}", size=2} 2025-09-23T01:31:34,659 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-09-23T01:31:34,659 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-09-23T01:31:34,659 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-09-23T01:31:34,659 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/.well-known" with default Osgi Context OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=317, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} 2025-09-23T01:31:34,659 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@713c8f4{/.well-known,null,AVAILABLE} 2025-09-23T01:31:34,660 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering 
OsgiServletContext{model=OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=317, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}} as OSGi service for "/.well-known" context path 2025-09-23T01:31:34,660 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-09-23T01:31:34,661 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-28,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]} 2025-09-23T01:31:34,661 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-28,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]}", size=1} 2025-09-23T01:31:34,661 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-28,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]} 2025-09-23T01:31:34,661 | INFO | Blueprint Extender: 3 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /.well-known with 3 service(s) 2025-09-23T01:31:34,663 | INFO | Blueprint Extender: 3 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | Binding URL provider org.opendaylight.restconf.server.jaxrs.JaxRsYangLibrary@4a7e0360 2025-09-23T01:31:34,723 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4AddressNoZone 2025-09-23T01:31:34,724 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4Prefix 2025-09-23T01:31:34,725 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6AddressNoZone 2025-09-23T01:31:34,726 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6Prefix 2025-09-23T01:31:34,754 | INFO | Blueprint Extender: 3 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with service class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer 2025-09-23T01:31:34,754 | INFO | Blueprint Extender: 3 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with base path: /restconf, default encoding: JSON, default pretty print: false 2025-09-23T01:31:34,803 | INFO | Blueprint Extender: 3 | OSGiNorthbound | 275 - 
org.opendaylight.netconf.restconf-nb - 9.0.0 | Global RESTCONF northbound pools started 2025-09-23T01:31:34,805 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 has been started 2025-09-23T01:31:34,806 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.shiro_0.21.0 [172] was successfully created 2025-09-23T01:31:35,592 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos: Elapsed time 17s, remaining time 282s, diag: Active {INSTALLED=0, RESOLVED=10, UNKNOWN=0, GRACE_PERIOD=0, WAITING=0, STARTING=0, ACTIVE=399, STOPPING=0, FAILURE=0} 2025-09-23T01:31:35,593 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] 2025-09-23T01:31:35,593 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | Now notifying all its registered SystemReadyListeners... 2025-09-23T01:31:35,593 | INFO | SystemReadyService-0 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | onSystemBootReady() received, starting the switch connections 2025-09-23T01:31:35,720 | INFO | epollEventLoopGroup-4-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6633 2025-09-23T01:31:35,720 | INFO | epollEventLoopGroup-2-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6653 2025-09-23T01:31:35,721 | INFO | epollEventLoopGroup-4-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6633 2025-09-23T01:31:35,721 | INFO | epollEventLoopGroup-2-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6653 2025-09-23T01:31:35,721 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@7c23b239 started 2025-09-23T01:31:35,721 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@7ffa49c6 started 2025-09-23T01:31:35,721 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | All switchConnectionProviders are up and running (2). 
2025-09-23T01:31:43,940 | INFO | qtp1298175459-421 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication is now enabled 2025-09-23T01:31:43,940 | INFO | qtp1298175459-421 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication Manager activated 2025-09-23T01:31:45,282 | INFO | qtp1298175459-421 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-09-23T01:31:45,286 | INFO | qtp1298175459-421 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-09-23T01:31:45,506 | INFO | qtp1298175459-421 | ApiPathParser | 273 - org.opendaylight.netconf.restconf-api - 9.0.0 | Consecutive slashes in REST URLs will be rejected 2025-09-23T01:31:49,096 | INFO | sshd-SshServer[e245f73](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.170.213:43340 authenticated 2025-09-23T01:31:49,731 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Cluster Restart 2025-09-23T01:32:33,332 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:32:33,335 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 0 2025-09-23T01:32:33,341 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-09-23T01:32:33,341 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection 
ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:32:33,346 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 4.531 ms 2025-09-23T01:35:00,035 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.150:2550: 2780 millis 2025-09-23T01:37:43,056 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node1 2025-09-23T01:37:43,619 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Cluster Restart 2025-09-23T01:37:44,149 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node1 2025-09-23T01:37:44,614 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node1" | core | 112 - 
org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node1 2025-09-23T01:37:45,039 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster 2025-09-23T01:37:45,469 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Leader Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Leader Before Leader Restart 2025-09-23T01:37:46,718 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Leader 2025-09-23T01:37:50,049 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Leader 2025-09-23T01:37:50,069 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:37:50,102 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:37:50,342 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:37:50,464 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Leader Restart 2025-09-23T01:37:50,755 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-09-23T01:37:50,969 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test 
openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Leader Restart 2025-09-23T01:39:33,415 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Leader From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Leader From Cluster Node 2025-09-23T01:39:33,759 | INFO | pipe-log:log "ROBOT MESSAGE: Killing ODL2 10.30.171.168" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Killing ODL2 10.30.171.168 2025-09-23T01:39:37,708 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node 2025-09-23T01:39:38,895 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking node as UNREACHABLE [Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)]. 2025-09-23T01:39:38,901 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:39:38,901 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found unreachable members, waiting for stable-after = 7000 ms before taking downing decision. Now 1 unreachable members found. Downing decision will not be made before 2025-09-23T01:39:45.901513548Z. 
2025-09-23T01:39:38,902 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:39:38,902 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 0 2025-09-23T01:39:38,903 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:39:38,904 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:39:38,904 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: refreshing backend for shard 0 2025-09-23T01:39:38,905 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, 
generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T01:39:38,906 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: refreshing backend for shard 1 2025-09-23T01:39:39,012 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-09-23T01:39:42,003 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [5] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:39:42,004 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.remote.rpc.registry.gossip.GossipStatus] from Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#191170312] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [6] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:39:42,004 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.remote.rpc.registry.gossip.GossipStatus] from Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#-1748809657] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [7] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:39:42,004 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [8] dead letters encountered. 
If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:39:42,005 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [9] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:39:42,006 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.remote.rpc.registry.gossip.GossipStatus] from Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#191170312] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [10] dead letters encountered, no more dead letters will be logged in next [5.000 min]. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
2025-09-23T01:39:42,030 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.168:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.168/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T01:39:44,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Follower): Leader pekko://opendaylight-cluster-data@10.30.171.168:2550 is unreachable 2025-09-23T01:39:44,043 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate): Starting new election term 4 2025-09-23T01:39:44,043 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-09-23T01:39:44,044 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@6100c1d2 2025-09-23T01:39:44,044 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , received role change from Follower to Candidate 2025-09-23T01:39:44,046 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-inventory-operational from Follower to Candidate 2025-09-23T01:39:44,121 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config (Follower): Term 4 in "RequestVote{term=4, candidateId=member-3-shard-default-config, lastLogIndex=144, lastLogTerm=3}" message is greater than follower's term 3 - updating term 2025-09-23T01:39:44,138 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@4a54ee3b 2025-09-23T01:39:44,139 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-default-config status sync done false 2025-09-23T01:39:44,143 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-default-config status sync done true 2025-09-23T01:39:44,146 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 
| member-1-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config#-925976125], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-09-23T01:39:44,147 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config#-925976125], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:39:44,147 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config#384960998], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-default-config#-925976125], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 487.9 μs 2025-09-23T01:39:44,371 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config (Follower): Term 4 in "RequestVote{term=4, candidateId=member-3-shard-inventory-config, lastLogIndex=7, lastLogTerm=3}" message is greater than follower's term 3 - updating term 2025-09-23T01:39:44,380 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational (Follower): Term 4 in "RequestVote{term=4, candidateId=member-3-shard-toaster-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 3 - updating term 2025-09-23T01:39:44,383 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@1cfa1e8 2025-09-23T01:39:44,383 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config (Follower): Term 4 in "RequestVote{term=4, 
candidateId=member-3-shard-topology-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 3 - updating term 2025-09-23T01:39:44,384 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-inventory-config status sync done false 2025-09-23T01:39:44,386 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-inventory-config status sync done true 2025-09-23T01:39:44,388 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-toaster-operational status sync done false 2025-09-23T01:39:44,388 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@65cdb12d 2025-09-23T01:39:44,389 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational (Follower): Term 4 in "RequestVote{term=4, candidateId=member-3-shard-default-operational, lastLogIndex=41, lastLogTerm=3}" message is greater than follower's term 3 - updating term 2025-09-23T01:39:44,389 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@57771a9e 2025-09-23T01:39:44,390 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-1-shard-topology-config status sync done false 2025-09-23T01:39:44,394 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-default-operational status sync done false 2025-09-23T01:39:44,394 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@21d92b9e 2025-09-23T01:39:44,395 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-default-operational status sync done true 2025-09-23T01:39:44,396 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 0 to 
ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational#-1777997171], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-09-23T01:39:44,397 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational#-1777997171], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-09-23T01:39:44,397 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational#1985543090], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-default-operational#-1777997171], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 337.1 μs 2025-09-23T01:39:44,401 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Follower): Leader pekko://opendaylight-cluster-data@10.30.171.168:2550 is unreachable 2025-09-23T01:39:44,404 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Candidate): Starting new election term 4 2025-09-23T01:39:44,404 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-09-23T01:39:44,404 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , received role change from Follower to Candidate 2025-09-23T01:39:44,405 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received 
LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@2850eda9 2025-09-23T01:39:44,405 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-toaster-config from Follower to Candidate 2025-09-23T01:39:44,412 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Follower): Leader pekko://opendaylight-cluster-data@10.30.171.168:2550 is unreachable 2025-09-23T01:39:44,414 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Candidate): Starting new election term 4 2025-09-23T01:39:44,415 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-09-23T01:39:44,415 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@44fe7e65 2025-09-23T01:39:44,415 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , received role change from Follower to Candidate 2025-09-23T01:39:44,415 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-topology-operational from Follower to Candidate 2025-09-23T01:39:44,418 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 4 2025-09-23T01:39:44,418 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@3b7d447a 2025-09-23T01:39:44,418 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-toaster-config , received role change from Candidate to Leader 2025-09-23T01:39:44,419 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-1-shard-toaster-config from Candidate to Leader 2025-09-23T01:39:44,419 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-09-23T01:39:44,422 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActorBehavior | 190 - 
org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 4 2025-09-23T01:39:44,422 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@5e57259 2025-09-23T01:39:44,424 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-topology-operational , received role change from Candidate to Leader 2025-09-23T01:39:44,425 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-topology-operational from Candidate to Leader 2025-09-23T01:39:44,426 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 2 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#550989355], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=2, shard=topology, dataTree=present} 2025-09-23T01:39:44,426 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=2} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=2, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#550989355], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=2, shard=topology, dataTree=present}} 2025-09-23T01:39:44,433 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=2} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=2, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#550989355], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=2, shard=topology, dataTree=present}} in 6.303 ms 2025-09-23T01:39:44,907 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-toaster-operational status sync done true 2025-09-23T01:39:44,907 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for 
member-1-shard-topology-config status sync done true 2025-09-23T01:39:45,938 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR took decision DownUnreachable and is downing [pekko://opendaylight-cluster-data@10.30.171.168:2550], [1] unreachable of [3] members, all members in DC [Member(pekko://opendaylight-cluster-data@10.30.170.97:2550, Up), Member(pekko://opendaylight-cluster-data@10.30.171.150:2550, Up), Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)], full reachability status: [pekko://opendaylight-cluster-data@10.30.170.97:2550 -> pekko://opendaylight-cluster-data@10.30.171.168:2550: Unreachable [Unreachable] (1), pekko://opendaylight-cluster-data@10.30.171.150:2550 -> pekko://opendaylight-cluster-data@10.30.171.168:2550: Unreachable [Unreachable] (1)] 2025-09-23T01:39:45,939 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR is downing [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,7765856557135676946)] 2025-09-23T01:39:45,939 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking unreachable node [pekko://opendaylight-cluster-data@10.30.171.168:2550] as [Down] 2025-09-23T01:39:45,940 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found unreachable members changed during stable-after period. Resetting timer. Now 1 unreachable members found. Downing decision will not be made before 2025-09-23T01:39:52.940410232Z. 2025-09-23T01:39:45,961 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_RETAINED_WITH_NO_CHANGE [wasOwner=true, isOwner=true, hasOwner=true] 2025-09-23T01:39:46,963 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is removing unreachable node [pekko://opendaylight-cluster-data@10.30.171.168:2550] 2025-09-23T01:39:46,966 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberRemoved: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:39:46,966 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberRemoved: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:39:46,970 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Association | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Association to [pekko://opendaylight-cluster-data@10.30.171.168:2550] with UID [7765856557135676946] is irrecoverably failed. UID is now quarantined and all messages to this UID will be delivered to dead letters. 
Remote ActorSystem must be restarted to recover from this situation. Reason: Cluster member removed, previous status [Down] 2025-09-23T01:39:49,621 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.168:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.168/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T01:39:50,302 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Restart Leader from Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Restart Leader from Cluster Node 2025-09-23T01:39:50,524 | INFO | pipe-log:log "ROBOT MESSAGE: Starting ODL2 10.30.171.168" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting ODL2 10.30.171.168 2025-09-23T01:39:54,095 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate): Starting new election term 5 2025-09-23T01:39:57,293 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-858587118]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T01:39:57,294 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-858587118]] (version [1.0.3]) 2025-09-23T01:39:58,162 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.171.168:2550] to [Up] 2025-09-23T01:39:58,163 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:39:58,164 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:39:58,164 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address 
pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:39:58,164 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer 
member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:39:58,165 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:39:58,166 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:39:58,166 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:39:58,166 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:39:58,924 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] 
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-inventory-operational currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] 
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-09-23T01:40:01,751 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:40:01,752 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:40:01,836 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): handleAppendEntriesReply - received unsuccessful reply: AppendEntriesReply{term=4, success=false, followerId=member-2-shard-topology-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, leader snapshotIndex: 4, snapshotTerm: 3, replicatedToAllIndex: -1 2025-09-23T01:40:01,837 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): follower member-2-shard-topology-operational appears to be behind the leader from the last snapshot - updated: matchIndex: -1, nextIndex: 0 2025-09-23T01:40:01,837 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): Initiating install snapshot to follower member-2-shard-topology-operational: follower nextIndex: 0, leader snapshotIndex: 4, leader lastIndex: 6, leader log size: 2 2025-09-23T01:40:01,844 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Initiating snapshot capture CaptureSnapshot [lastAppliedIndex=6, lastAppliedTerm=4, lastIndex=6, lastTerm=4, installSnapshotInitiated=, replicatedToAllIndex=-1, replicatedToAllTerm=-1, unAppliedEntries size=0, mandatoryTrim=false] to install on member-2-shard-topology-operational 2025-09-23T01:40:01,856 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Persising snapshot at EntryInfo[index=6, term=4]/EntryInfo[index=6, term=4] 2025-09-23T01:40:01,857 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Removed in-memory snapshotted entries, adjusted snapshotIndex: 4 and term: 3 2025-09-23T01:40:01,872 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | 
SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: snapshot is durable as of 2025-09-23T01:40:01.857532825Z 2025-09-23T01:40:01,927 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): Snapshot successfully installed on follower member-2-shard-topology-operational (last chunk 1) - matchIndex set to 6, nextIndex set to 7 2025-09-23T01:40:02,261 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-09-23T01:40:04,201 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate): Term 6 in "RequestVote{term=6, candidateId=member-3-shard-inventory-operational, lastLogIndex=364, lastLogTerm=3}" message is greater than Candidate's term 5 - switching to Follower 2025-09-23T01:40:04,207 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational (Candidate) :- Switching from behavior Candidate to Follower, election term: 6 2025-09-23T01:40:04,208 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-1-shard-inventory-operational , received role change from Candidate to Follower 2025-09-23T01:40:04,208 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-1-shard-inventory-operational from Candidate to Follower 2025-09-23T01:40:04,209 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7623337f 2025-09-23T01:40:04,210 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-09-23T01:40:04,210 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-inventory-operational status sync done false 2025-09-23T01:40:04,217 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-1-shard-inventory-operational status sync done true 2025-09-23T01:40:04,217 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational#782202587], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, 
shard=inventory, dataTree=absent} 2025-09-23T01:40:04,217 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational#782202587], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T01:40:04,218 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-operational: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1589978976], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-operational/member-3-shard-inventory-operational#782202587], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 744.6 μs 2025-09-23T01:40:13,590 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Leader Restart 2025-09-23T01:41:10,107 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2053 millis 2025-09-23T01:41:14,821 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2687 millis 2025-09-23T01:46:06,299 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Leader 
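The entries between 01:39:44 and 01:40:04 show the usual Raft reaction to a lost leader: shards on member-1 either start their own election ("Starting new election term 4"/"term 5") or adopt the higher term carried in member-3's RequestVote messages ("Term 4 ... is greater than follower's term 3 - updating term") and fall back to Follower. A minimal, illustrative sketch of those two rules follows; it is not the actual sal-akka-raft implementation, only a compact restatement of the behaviour the log records.

// Illustration only (not ODL's sal-akka-raft code) of the two transitions
// logged above: a follower that stops hearing from its leader becomes a
// candidate and bumps its term, and any node that sees a RequestVote with a
// higher term adopts that term and steps back to follower.
final class RaftTermSketch {
    enum Role { FOLLOWER, CANDIDATE, LEADER }

    private Role role = Role.FOLLOWER;
    private long currentTerm = 3;

    /** Election timeout fired without hearing from the current leader. */
    void onLeaderUnreachable() {
        role = Role.CANDIDATE;
        currentTerm++;                 // "Starting new election term 4"
        // ...then request votes from the other shard replicas...
    }

    /** A RequestVote arrived from another candidate. */
    void onRequestVote(long candidateTerm) {
        if (candidateTerm > currentTerm) {
            currentTerm = candidateTerm;   // adopt the higher term
            role = Role.FOLLOWER;          // step down before deciding the vote
        }
    }
}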
2025-09-23T01:46:09,615 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Leader Restart 2025-09-23T01:46:09,722 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:46:09,961 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:46:10,270 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-09-23T01:47:50,109 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node After Leader Restart 2025-09-23T01:47:50,712 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:47:50,792 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:47:51,297 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-09-23T01:47:52,931 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Leader Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Leader Node 2025-09-23T01:47:53,344 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Leader Restart 2025-09-23T01:47:53,781 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get 
Inventory Follower Before follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before follower Restart 2025-09-23T01:47:54,937 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node2 2025-09-23T01:47:57,763 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower Node2 2025-09-23T01:47:58,183 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Follower Restart 2025-09-23T01:47:58,241 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:47:58,401 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:47:58,461 | INFO | opendaylight-cluster-data-notification-dispatcher-64 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-09-23T01:47:58,644 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Follower Restart 2025-09-23T01:49:39,392 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking node as UNREACHABLE [Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)]. 
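The "Add Bulk Flow From Follower Node2" and "Get Bulk Flows and Verify In Cluster" steps drive the bulk-o-matic application and then read the programmed flows back from the config datastore. Below is a minimal sketch of the kind of RESTCONF read such a verification step performs; the controller address, port 8181, admin/admin credentials and the RFC 8040 /rests path are assumptions about this test environment, not taken from the log, and the real Robot suite parses the flow count rather than just printing the response size.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical verification read: fetch the configured inventory over
// RESTCONF and report the response size. Endpoint and credentials are
// assumptions for illustration only.
public final class InventoryConfigRead {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
            .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://10.30.170.97:8181/rests/data/"
                + "opendaylight-inventory:nodes?content=config"))
            .header("Authorization", "Basic " + auth)
            .header("Accept", "application/json")
            .GET()
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode()
            + ", body length " + response.body().length());
    }
}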
2025-09-23T01:49:39,393 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:39,394 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found unreachable members, waiting for stable-after = 7000 ms before taking downing decision. Now 1 unreachable members found. Downing decision will not be made before 2025-09-23T01:49:46.393850777Z. 2025-09-23T01:49:39,393 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:40,988 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Follower Node2 2025-09-23T01:49:42,625 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,6799237801069365872)] 2025-09-23T01:49:42,626 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,6799237801069365872)] 2025-09-23T01:49:42,626 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,6799237801069365872)] 2025-09-23T01:49:42,627 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,6799237801069365872)] 2025-09-23T01:49:42,899 | INFO | pipe-log:log "ROBOT MESSAGE: Killing ODL2 10.30.171.168" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Killing ODL2 10.30.171.168 2025-09-23T01:49:43,317 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,6799237801069365872)] 2025-09-23T01:49:43,474 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko 
- 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking node as REACHABLE [Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)]. 2025-09-23T01:49:44,499 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received ReachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:44,500 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found all unreachable members healed during stable-after period, no downing decision necessary for now. 2025-09-23T01:49:44,500 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received ReachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 
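These entries show member-2 being marked UNREACHABLE, the SplitBrainResolver waiting out its stability window ("stable-after = 7000 ms"), and the member healing back to REACHABLE before any downing decision is required, so the shard peer addresses are simply re-registered. The sketch below names the corresponding upstream Pekko settings; the key names follow stock Pekko and are an assumption for this ODL build, which loads its cluster configuration from its own files, so this is only an illustration of which knobs the resolver messages refer to.

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Illustration of the upstream Pekko keys behind the SBR messages above.
// Not a drop-in snippet for this deployment; ODL ships its own cluster
// configuration and defaults may differ.
public final class SbrSettingsSketch {
    public static void main(String[] args) {
        Config sbr = ConfigFactory.parseString(
            "pekko.cluster.downing-provider-class = "
                + "\"org.apache.pekko.cluster.sbr.SplitBrainResolverProvider\"\n"
            + "pekko.cluster.split-brain-resolver.active-strategy = keep-majority\n"
            + "pekko.cluster.split-brain-resolver.stable-after = 7s\n");
        // 7s matches the "stable-after = 7000 ms" wait reported by the resolver.
        System.out.println(
            sbr.getDuration("pekko.cluster.split-brain-resolver.stable-after"));
    }
}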
2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:49:44,501 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:49:44,502 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:49:46,820 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node2 and Exit" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test 
openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node2 and Exit 2025-09-23T01:49:48,571 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking node as UNREACHABLE [Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)]. 2025-09-23T01:49:48,572 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found unreachable members, waiting for stable-after = 7000 ms before taking downing decision. Now 1 unreachable members found. Downing decision will not be made before 2025-09-23T01:49:55.572473143Z. 2025-09-23T01:49:48,572 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:48,573 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:51,138 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.168:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.168/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T01:49:51,141 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [90] dead letters encountered, of which 79 were not logged. The counter will be reset now. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
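The SplitBrainResolver lines above wait out a stable-after window of 7000 ms before any downing decision. Assuming the standard Pekko split-brain-resolver configuration keys (the keep-majority strategy below is an assumption; only the 7 s window is taken from the log), a sketch of how that behaviour is typically expressed:

    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;

    // Sketch only: the usual Pekko SBR settings corresponding to the
    // "stable-after = 7000 ms" behaviour logged above. Values are illustrative.
    public final class SbrConfigSketch {
        public static void main(String[] args) {
            Config sbr = ConfigFactory.parseString(
                "pekko.cluster.downing-provider-class = "
                    + "\"org.apache.pekko.cluster.sbr.SplitBrainResolverProvider\"\n"
                + "pekko.cluster.split-brain-resolver.active-strategy = keep-majority\n"
                + "pekko.cluster.split-brain-resolver.stable-after = 7s\n");
            // 7s matches the 7000 ms window reported by the resolver in this log.
            System.out.println(sbr.getString("pekko.cluster.split-brain-resolver.stable-after"));
        }
    }

The later DownUnreachable decision ([1] unreachable of [3] members) is consistent with a majority-keeping strategy, but the exact strategy configured for this deployment is not visible in the log.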
2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [2] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [3] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-1-shard-toaster-config#436704851] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [4] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-1-shard-topology-operational#550989355] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [5] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [6] dead letters encountered. 
If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,142 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-181737551] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [7] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,143 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.GossipEnvelope] from Actor[pekko://opendaylight-cluster-data/system/cluster/core/daemon#-459105915] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [8] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,143 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-1-shard-toaster-config#436704851] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [9] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-09-23T01:49:51,143 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.remote.rpc.registry.gossip.GossipStatus] from Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#-1748809657] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [10] dead letters encountered, no more dead letters will be logged in next [5.000 min]. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
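Every dead-letter entry above points at the same two settings, 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. A minimal sketch of tuning them down while a member is deliberately killed; the setting names come straight from the log messages, the values are only examples:

    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;

    // Sketch: quieting the dead-letter log spam seen above while member-2 is down.
    // The two keys are exactly the ones named in the log; "10" and "off" are
    // example values, not what this test deployment actually used.
    public final class DeadLetterConfigSketch {
        public static void main(String[] args) {
            Config cfg = ConfigFactory.parseString(
                "pekko.log-dead-letters = 10\n"                       // log only the first 10 per window
                + "pekko.log-dead-letters-during-shutdown = off\n");  // stay silent during shutdown
            System.out.println(cfg.getValue("pekko.log-dead-letters"));
        }
    }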
2025-09-23T01:49:55,662 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR took decision DownUnreachable and is downing [pekko://opendaylight-cluster-data@10.30.171.168:2550], [1] unreachable of [3] members, all members in DC [Member(pekko://opendaylight-cluster-data@10.30.170.97:2550, Up), Member(pekko://opendaylight-cluster-data@10.30.171.150:2550, Up), Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)], full reachability status: [pekko://opendaylight-cluster-data@10.30.170.97:2550 -> pekko://opendaylight-cluster-data@10.30.171.168:2550: Unreachable [Unreachable] (4), pekko://opendaylight-cluster-data@10.30.171.150:2550 -> pekko://opendaylight-cluster-data@10.30.171.168:2550: Unreachable [Unreachable] (4)] 2025-09-23T01:49:55,663 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR is downing [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,6799237801069365872)] 2025-09-23T01:49:55,664 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking unreachable node [pekko://opendaylight-cluster-data@10.30.171.168:2550] as [Down] 2025-09-23T01:49:55,664 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found unreachable members changed during stable-after period. Resetting timer. Now 1 unreachable members found. Downing decision will not be made before 2025-09-23T01:50:02.663691555Z. 2025-09-23T01:49:56,712 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is removing unreachable node [pekko://opendaylight-cluster-data@10.30.171.168:2550] 2025-09-23T01:49:56,713 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberRemoved: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:56,713 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Association | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Association to [pekko://opendaylight-cluster-data@10.30.171.168:2550] with UID [6799237801069365872] is irrecoverably failed. UID is now quarantined and all messages to this UID will be delivered to dead letters. Remote ActorSystem must be restarted to recover from this situation. 
Reason: Cluster member removed, previous status [Down] 2025-09-23T01:49:56,713 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberRemoved: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:49:58,939 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.171.168:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.171.168/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-09-23T01:49:59,426 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Restart Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Restart Follower Node2 2025-09-23T01:49:59,641 | INFO | pipe-log:log "ROBOT MESSAGE: Starting ODL2 10.30.171.168" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting ODL2 10.30.171.168 2025-09-23T01:50:06,433 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-765903722]] to [pekko://opendaylight-cluster-data@10.30.170.97:2550] 2025-09-23T01:50:06,434 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.97:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.171.168:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-765903722]] (version [1.0.3]) 2025-09-23T01:50:06,601 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Node [pekko://opendaylight-cluster-data@10.30.171.168:2550] is JOINING, roles [member-2, dc-default], version [0.0.0] 2025-09-23T01:50:07,932 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Leader is moving node [pekko://opendaylight-cluster-data@10.30.171.168:2550] to [Up] 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:50:07,932 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, 
address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 
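After "All Shards are ready" a suite like this one typically cross-checks shard state out of band. A hedged sketch of one way to do that, assuming the default Jolokia endpoint on port 8181 and the usual OpenDaylight shard MBean naming (both assumptions, and authentication is omitted); nothing in this log confirms those details:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch: poll one shard's MBean over Jolokia to confirm the state the log reports.
    // The port, path and MBean name pattern are assumptions based on common
    // OpenDaylight setups, not values taken from this log.
    public final class ShardStatusProbe {
        public static void main(String[] args) throws Exception {
            String url = "http://10.30.170.97:8181/jolokia/read/"
                + "org.opendaylight.controller:type=DistributedConfigDatastore,"
                + "Category=Shards,name=member-1-shard-inventory-config";
            HttpResponse<String> rsp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            // The returned JSON typically carries RaftState, Leader and PeerAddresses
            // attributes that mirror the ShardManager/PeerInfos messages above.
            System.out.println(rsp.body());
        }
    }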
2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T01:50:07,933 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T01:50:07,934 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T01:50:10,523 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:50:10,523 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:50:10,525 | WARN | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft 
- 11.0.0 | member-1-shard-toaster-config (Leader) : handleAppendEntriesReply delayed beyond election timeout, appendEntriesReply : AppendEntriesReply{term=4, success=true, followerId=member-2-shard-toaster-config, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, timeSinceLastActivity : 27191, lastApplied : -1, commitIndex : -1 2025-09-23T01:50:10,526 | WARN | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader) : handleAppendEntriesReply delayed beyond election timeout, appendEntriesReply : AppendEntriesReply{term=4, success=false, followerId=member-2-shard-topology-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, timeSinceLastActivity : 27062, lastApplied : 25, commitIndex : 25 2025-09-23T01:50:10,526 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): handleAppendEntriesReply - received unsuccessful reply: AppendEntriesReply{term=4, success=false, followerId=member-2-shard-topology-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, leader snapshotIndex: 24, snapshotTerm: 4, replicatedToAllIndex: 24 2025-09-23T01:50:10,526 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): follower member-2-shard-topology-operational appears to be behind the leader from the last snapshot - updated: matchIndex: -1, nextIndex: 0 2025-09-23T01:50:10,526 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): Initiating install snapshot to follower member-2-shard-topology-operational: follower nextIndex: 0, leader snapshotIndex: 24, leader lastIndex: 25, leader log size: 1 2025-09-23T01:50:10,527 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Initiating snapshot capture CaptureSnapshot [lastAppliedIndex=25, lastAppliedTerm=4, lastIndex=25, lastTerm=4, installSnapshotInitiated=, replicatedToAllIndex=-1, replicatedToAllTerm=-1, unAppliedEntries size=0, mandatoryTrim=false] to install on member-2-shard-topology-operational 2025-09-23T01:50:10,529 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Persising snapshot at EntryInfo[index=25, term=4]/EntryInfo[index=25, term=4] 2025-09-23T01:50:10,529 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Removed in-memory snapshotted entries, adjusted snapshotIndex: 24 and term: 4 2025-09-23T01:50:10,533 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): handleAppendEntriesReply - received 
unsuccessful reply: AppendEntriesReply{term=4, success=false, followerId=member-2-shard-topology-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, leader snapshotIndex: 24, snapshotTerm: 4, replicatedToAllIndex: 24 2025-09-23T01:50:10,533 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): follower member-2-shard-topology-operational appears to be behind the leader from the last snapshot - updated: matchIndex: -1, nextIndex: 0 2025-09-23T01:50:10,535 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: snapshot is durable as of 2025-09-23T01:50:10.529845573Z 2025-09-23T01:50:10,611 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational (Leader): Snapshot successfully installed on follower member-2-shard-topology-operational (last chunk 1) - matchIndex set to 25, nextIndex set to 26 2025-09-23T01:50:11,029 | INFO | node-cleaner-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-09-23T01:50:22,821 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Follower Node2 Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Follower Node2 Restart 2025-09-23T01:50:26,664 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2700 millis 2025-09-23T01:56:15,213 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node2 2025-09-23T01:56:18,452 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Follower Node2 Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Follower Node2 Restart 2025-09-23T01:56:19,013 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:56:19,212 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 
0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-09-23T01:56:19,486 | INFO | opendaylight-cluster-data-notification-dispatcher-73 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-09-23T01:58:00,909 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node2 2025-09-23T01:58:01,471 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:58:01,472 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-09-23T01:58:01,977 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-09-23T01:58:03,682 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node 2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node 2 2025-09-23T01:58:04,081 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Follower Node2 Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Follower Node2 Restart 2025-09-23T01:58:06,522 | INFO | sshd-SshServer[e245f73](port=8101)-nio2-thread-1 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.170.213:45030 authenticated 2025-09-23T01:58:07,438 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/040__Cluster_Current_Term_Verification_3Node_Cluster.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/040__Cluster_Current_Term_Verification_3Node_Cluster.robot 2025-09-23T01:58:07,834 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Check Shard And Get Inventory" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Check Shard And Get 
Inventory 2025-09-23T01:58:12,729 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Initial Current Term Verification" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Initial Current Term Verification 2025-09-23T01:58:13,121 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Add Bulk Flow From Follower 2025-09-23T01:58:13,424 | INFO | qtp1298175459-533 | StaticConfiguration | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding-over-DOM codec shortcuts are enabled 2025-09-23T01:58:13,459 | INFO | qtp1298175459-533 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Using Ping Pong Flow Tester Impl 2025-09-23T01:58:13,459 | INFO | qtp1298175459-533 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Using Transaction Chain Flow Writer Impl 2025-09-23T01:58:13,464 | INFO | ForkJoinPool-10-worker-1 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Number of Txn for dpId: openflow:1 is: 1 2025-09-23T01:58:13,464 | INFO | ForkJoinPool-10-worker-1 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Creating new txChain: org.opendaylight.mdsal.binding.dom.adapter.BindingDOMTransactionChainAdapter@1d1ba6a4 for dpid: openflow:1 2025-09-23T01:58:13,574 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T01:58:13,575 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T01:58:13,575 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 331.4 μs 2025-09-23T01:58:14,395 | INFO | ForkJoinPool-10-worker-1 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Completed FlowHandlerTask thread for dpid: openflow:1 2025-09-23T01:58:14,836 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.lang.UnsupportedOperationException: null at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] 
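FlowWriterTxChain above creates a BindingDOMTransactionChainAdapter and, as the later ERROR entries show, learns of the commit outcome asynchronously through a future callback. A rough sketch of that transaction-chain-plus-callback pattern against the MD-SAL binding API (the class name and the commented write are illustrative; this is not the bulk-o-matic source):

    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.MoreExecutors;
    import org.opendaylight.mdsal.binding.api.DataBroker;
    import org.opendaylight.mdsal.binding.api.TransactionChain;
    import org.opendaylight.mdsal.binding.api.WriteTransaction;
    import org.opendaylight.mdsal.common.api.CommitInfo;

    // Sketch of the pattern visible in the log: a chain is created, writes are
    // submitted, and success or failure is reported asynchronously (the
    // TransactionCommitFailedException below arrives through such a callback).
    final class TxChainSketch {
        static void writeWithChain(DataBroker broker) {
            TransactionChain chain = broker.createTransactionChain();
            WriteTransaction tx = chain.newWriteOnlyTransaction();
            // ... put the flow data here, e.g. tx.mergeParentStructurePut(CONFIGURATION, flowPath, flow);
            tx.commit().addCallback(new FutureCallback<CommitInfo>() {
                @Override
                public void onSuccess(CommitInfo result) {
                    // corresponds to the "Completed all flows installation" message
                }

                @Override
                public void onFailure(Throwable cause) {
                    // corresponds to the commit failure and chain FAILED entries below
                    chain.close();
                }
            }, MoreExecutors.directExecutor());
        }
    }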
2025-09-23T01:58:14,867 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | null 2025-09-23T01:59:13,612 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T01:59:13,612 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T01:59:13,616 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T01:59:13,616 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T01:59:13,722 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with 
ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 105.5 ms 2025-09-23T02:00:13,652 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:00:13,653 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T02:00:13,657 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T02:00:13,658 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:00:13,730 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=4, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 72.41 ms 2025-09-23T02:00:43,678 | INFO | CommitFutures-7 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Completed all flows installation for: dpid: openflow:1 in 150218489412ns 2025-09-23T02:00:43,679 | ERROR | CommitFutures-7 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error: TransactionCommitFailedException{message=canCommit encountered an unexpected failure, errorList=[RpcError [message=canCommit encountered an unexpected failure, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ModifyTransactionRequest{target=member-1-datastore-config-fe-1-chn-6-txn-0-1, sequence=21, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1837282775], modifications=0, protocol=SIMPLE} timed out after 149.277100724 seconds. The backend for inventory is not available.]]} in Datastore write operation: dpid: openflow:1, begin tableId: 0, end tableId: 1, sourceIp: 10001 2025-09-23T02:00:43,678 | ERROR | CommitFutures-6 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Transaction chain: org.opendaylight.mdsal.binding.dom.adapter.BindingDOMTransactionChainAdapter@1d1ba6a4 FAILED due to: org.opendaylight.mdsal.common.api.TransactionCommitFailedException: canCommit encountered an unexpected failure at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.newWithCause(TransactionCommitFailedExceptionMapper.java:42) ~[bundleFile:14.0.13] at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.newWithCause(TransactionCommitFailedExceptionMapper.java:18) ~[bundleFile:14.0.13] at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:98) ~[bundleFile:14.0.14] at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.apply(TransactionCommitFailedExceptionMapper.java:37) ~[bundleFile:14.0.13] at org.opendaylight.controller.cluster.databroker.ConcurrentDOMDataBroker.handleException(ConcurrentDOMDataBroker.java:189) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.ConcurrentDOMDataBroker$1.onFailure(ConcurrentDOMDataBroker.java:133) ~[bundleFile:?] at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1117) ~[bundleFile:?] at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1004) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:767) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:516) ~[bundleFile:?] 
at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:54) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractProxyTransaction.lambda$directCommit$4(AbstractProxyTransaction.java:510) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$handleReplayedModifyTransactionRequest$16(RemoteProxyTransaction.java:510) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.RemoteProxyTransaction.lambda$handleReplayedModifyTransactionRequest$16(RemoteProxyTransaction.java:510) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) ~[?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) ~[?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) ~[?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) ~[?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) ~[?:?] 
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ModifyTransactionRequest{target=member-1-datastore-config-fe-1-chn-6-txn-0-1, sequence=21, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1837282775], modifications=0, protocol=SIMPLE} timed out after 149.277100724 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[bundleFile:?] ... 26 more 2025-09-23T02:00:43,987 | WARN | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 
2 more 2025-09-23T02:00:43,988 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-09-23T02:01:13,692 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:01:13,692 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T02:01:13,695 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T02:01:13,695 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:01:13,696 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=5, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with 
ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 568.1 μs 2025-09-23T02:01:14,127 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 
2 more 2025-09-23T02:01:14,129 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-09-23T02:01:27,029 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2255 millis 2025-09-23T02:01:43,712 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:01:43,712 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T02:01:43,715 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T02:01:43,715 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:01:43,716 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection 
ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=6, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 571.1 μs 2025-09-23T02:01:43,757 | WARN | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] 
at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 2 more 2025-09-23T02:01:43,758 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-09-23T02:02:13,731 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:02:13,732 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T02:02:13,735 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T02:02:13,735 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:02:13,736 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=7, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 600.0 μs 2025-09-23T02:02:13,907 | WARN | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] 
at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 2 more 2025-09-23T02:02:13,909 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-09-23T02:02:43,702 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:02:43,702 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T02:02:43,705 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=9, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-09-23T02:02:43,706 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=9, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}}
2025-09-23T02:02:43,706 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=8, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=9, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 427.0 μs
2025-09-23T02:03:36,539 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 4396 millis
2025-09-23T02:04:04,147 | INFO | opendaylight-cluster-data-shard-dispatcher-47 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550
2025-09-23T02:04:04,147 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550
2025-09-23T02:04:04,148 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found unreachable members, waiting for stable-after = 7000 ms before taking downing decision. Now 1 unreachable members found. Downing decision will not be made before 2025-09-23T02:04:11.147913734Z.
2025-09-23T02:04:04,402 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking node as UNREACHABLE [Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)].
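The PhiAccrualFailureDetector warnings, the UnreachableMember notifications and the SplitBrainResolver decision above are governed by Pekko cluster settings. The Java sketch below is a minimal illustration of the relevant keys, assuming the stock Pekko key names; all values are illustrative placeholders except stable-after, which the SplitBrainResolver entry above reports as 7000 ms, and the keep-majority strategy is an assumption about a typical profile rather than a value read from this deployment.

// Minimal sketch: the cluster settings behind the failure-detector / SBR log entries above.
// Key names follow stock Pekko configuration; values are illustrative, not from this node.
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ClusterTuningSketch {
    public static void main(String[] args) {
        Config config = ConfigFactory.parseString(
            "pekko.cluster.failure-detector.heartbeat-interval = 1 s\n"
          + "pekko.cluster.failure-detector.threshold = 8.0\n"
          + "pekko.cluster.failure-detector.acceptable-heartbeat-pause = 3 s\n"
          + "pekko.cluster.downing-provider-class = "
          +     "\"org.apache.pekko.cluster.sbr.SplitBrainResolverProvider\"\n"
          + "pekko.cluster.split-brain-resolver.active-strategy = keep-majority\n"
          + "pekko.cluster.split-brain-resolver.stable-after = 7 s\n");

        // "heartbeat interval is growing too large" is logged when heartbeat arrivals stretch
        // well past heartbeat-interval; the node is marked UNREACHABLE once phi exceeds the
        // configured threshold.
        System.out.println("phi threshold       = "
            + config.getDouble("pekko.cluster.failure-detector.threshold"));
        System.out.println("acceptable pause    = "
            + config.getDuration("pekko.cluster.failure-detector.acceptable-heartbeat-pause"));
        // SBR waits stable-after before taking a downing decision, matching the
        // "waiting for stable-after = 7000 ms" message above.
        System.out.println("SBR stable-after    = "
            + config.getDuration("pekko.cluster.split-brain-resolver.stable-after"));
        System.out.println("SBR active strategy = "
            + config.getString("pekko.cluster.split-brain-resolver.active-strategy"));
    }
}

With settings like these, a member is marked UNREACHABLE once its phi value crosses the threshold, and SBR only downs it if it stays unreachable for the whole stable-after window; raising acceptable-heartbeat-pause or the threshold tolerates the multi-second heartbeat gaps logged above at the cost of slower failure detection.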
2025-09-23T02:04:05,269 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip status from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,9122582790837224611)]
2025-09-23T02:04:05,272 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-35 | ClusterGossip | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Ignoring received gossip status from unreachable [UniqueAddress(pekko://opendaylight-cluster-data@10.30.171.168:2550,9122582790837224611)]
2025-09-23T02:04:05,421 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.97:2550] - Marking node as REACHABLE [Member(pekko://opendaylight-cluster-data@10.30.171.168:2550, Up)].
2025-09-23T02:04:06,909 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received ReachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550
2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config
2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config
2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR found all unreachable members healed during stable-after period, no downing decision necessary for now.
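The test steps logged further below ("Get Bulk Flows And Verify In Cluster", "Current Term Verification After Adding Bulk Flow") revolve around reading the inventory shard's Raft state. The sketch below shows one way such a check could be done over Jolokia; the MBean and attribute names (RaftState, Leader, CurrentTerm) follow the usual DistributedConfigDatastore/Shards naming and are assumptions here, as are the host, port and credentials, which are placeholders rather than values taken from this cluster.

// Hypothetical helper, not part of the controller: reads a shard's Raft attributes via Jolokia.
// MBean/attribute names are assumed; endpoint and credentials are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ShardTermCheck {
    public static void main(String[] args) throws Exception {
        String mbean = "org.opendaylight.controller:type=DistributedConfigDatastore,"
                     + "Category=Shards,name=member-1-shard-inventory-config";
        String url = "http://10.30.170.97:8181/jolokia/read/" + mbean;   // placeholder endpoint

        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8)); // placeholder credentials

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        // The JSON response carries the shard's RaftState, Leader and CurrentTerm attributes;
        // comparing CurrentTerm before and after the bulk-flow writes is the essence of the
        // "Current Term Verification" steps named in the Robot messages below.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}

A change in CurrentTerm between two such reads indicates a leader election happened during the run; the NotLeaderException entries further below show the inventory shard backend losing leadership mid-run, which is exactly the condition such a comparison is meant to catch.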
2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-topology-config 2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-default-config 2025-09-23T02:04:06,910 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received ReachableMember: memberName MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.171.168:2550 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-09-23T02:04:06,911 | INFO | 
opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-09-23T02:04:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-1-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.171.168:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-09-23T02:04:54,108 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Get Bulk Flows And Verify In Cluster" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Get Bulk Flows And Verify In Cluster 2025-09-23T02:04:54,842 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] 
Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 2 more 2025-09-23T02:04:54,843 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-09-23T02:04:54,844 | WARN | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] 
Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 
2 more
2025-09-23T02:04:54,844 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-36 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed
2025-09-23T02:08:50,195 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2061 millis
2025-09-23T02:08:54,803 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.171.168:2550: 2622 millis
2025-09-23T02:11:34,764 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Adding Bulk Flow" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Adding Bulk Flow
2025-09-23T02:11:35,392 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before And After Addition Of Flow" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before And After Addition Of Flow
2025-09-23T02:11:35,869 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete and Add ten percent of the flows for 5 iterations" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete and Add ten percent of the flows for 5 iterations
2025-09-23T02:11:36,243 | INFO | qtp1298175459-509 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Using Ping Pong Flow Tester Impl
2025-09-23T02:11:36,243 | INFO | qtp1298175459-509 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Using Transaction Chain Flow Writer Impl
2025-09-23T02:11:36,244 | INFO | ForkJoinPool-10-worker-2 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Number of Txn for dpId: openflow:1 is: 1
2025-09-23T02:11:36,244 | INFO | ForkJoinPool-10-worker-2 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Creating new txChain: org.opendaylight.mdsal.binding.dom.adapter.BindingDOMTransactionChainAdapter@5fede25a for dpid: openflow:1
2025-09-23T02:11:36,256 | INFO | ForkJoinPool-10-worker-2 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Completed FlowHandlerTask thread for dpid: openflow:1
2025-09-23T02:11:36,259 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1,
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=9, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} indicated no leadership, reconnecting it org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleRequest(Shard.java:546) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.Shard.handleRequestEnvelope(Shard.java:382) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:329) ~[bundleFile:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[bundleFile:11.0.0] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) [bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) [bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) [bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) [bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) [bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) [bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) [bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) [bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) [bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) [bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) [bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) [bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) [bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) [bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) [bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) [bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) [bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) [bundleFile:?] 
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 2025-09-23T02:11:36,267 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=9, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.171.150:2550/user/shardmanager-config/member-3-shard-inventory-config#-223950070], sessionId=9, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-09-23T02:11:36,267 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-1-frontend-datastore-config: refreshing backend for shard 1 2025-09-23T02:11:36,272 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] 
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
[The same "Failed to resolve shard" WARN from AbstractShardBackendResolver, with an identical TimeoutException/NotLeaderException stack trace, recurs at roughly one-second intervals: 02:11:37,295, 02:11:38,315, 02:11:39,336, 02:11:40,355, 02:11:41,374, 02:11:42,395, 02:11:43,417, 02:11:44,435, 02:11:45,457, 02:11:46,475, 02:11:47,495, 02:11:48,518, 02:11:49,535, 02:11:50,554 and 02:11:51,575.]
2025-09-23T02:11:52,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:53,615 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:54,634 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:55,655 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:56,675 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:57,695 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:58,716 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:11:59,735 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:12:00,755 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:01,774 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:02,795 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:03,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:04,835 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
[... 14 further identical "Failed to resolve shard" WARN entries from AbstractShardBackendResolver, each with the same TimeoutException/NotLeaderException stack trace for member-3-shard-inventory-config, logged roughly once per second from 02:12:05,854 through 02:12:19,115, omitted ...]
2025-09-23T02:12:20,136 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
[The identical "Failed to resolve shard" WARN entry from AbstractShardBackendResolver, with the same java.util.concurrent.TimeoutException: Connection attempt failed and Caused by: NotLeaderException (member-3-shard-inventory-config is not the current leader) stack trace shown above, recurred at roughly one-second intervals: 16 further occurrences from 2025-09-23T02:12:20,136 through 2025-09-23T02:12:35,435. The repeated traces are omitted here.]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:36,455 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:37,475 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:38,495 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:39,516 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:12:40,535 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:41,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:42,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:43,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:44,613 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:45,635 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:46,655 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:47,677 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:48,697 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:49,715 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:50,736 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:51,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:52,775 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:12:53,794 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:54,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:55,834 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:56,855 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:57,874 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:58,895 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:12:59,915 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:00,934 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:01,955 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:02,975 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:03,995 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:05,015 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:06,035 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:13:07,089 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:08,105 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:09,124 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:10,146 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:11,165 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:12,209 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:13,225 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:14,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:15,298 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:16,315 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:17,334 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:18,360 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:19,375 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:13:20,394 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:21,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:22,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:23,473 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:24,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:25,515 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:26,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:27,555 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:28,599 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:29,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more
[2025-09-23T02:13:30,635 through 02:13:35,735: the WARN "Failed to resolve shard" from AbstractShardBackendResolver recurred once per second (02:13:30,635, 02:13:31,656, 02:13:32,675, 02:13:33,695, 02:13:34,715, 02:13:35,735), each time with the identical TimeoutException: Connection attempt failed, caused by NotLeaderException for member-3-shard-inventory-config, and the same stack trace as shown above; duplicate traces omitted]
2025-09-23T02:13:36,273 | INFO | CommitFutures-9 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Completed all flows installation for: dpid: openflow:1 in 4926646412281ns
2025-09-23T02:13:36,274 | ERROR | CommitFutures-9 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error: TransactionCommitFailedException{message=canCommit encountered an unexpected failure, errorList=[RpcError [message=canCommit encountered an unexpected failure, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ModifyTransactionRequest{target=member-1-datastore-config-fe-1-chn-7-txn-0-1, sequence=2, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1837282775], modifications=0, protocol=SIMPLE} timed out after 120.014672769 seconds.
The backend for inventory is not available.]]} in Datastore write operation: dpid: openflow:1, begin tableId: 0, end tableId: 1, sourceIp: 1001 2025-09-23T02:13:36,273 | ERROR | CommitFutures-8 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Transaction chain: org.opendaylight.mdsal.binding.dom.adapter.BindingDOMTransactionChainAdapter@5fede25a FAILED due to: org.opendaylight.mdsal.common.api.TransactionCommitFailedException: canCommit encountered an unexpected failure at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.newWithCause(TransactionCommitFailedExceptionMapper.java:42) ~[bundleFile:14.0.13] at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.newWithCause(TransactionCommitFailedExceptionMapper.java:18) ~[bundleFile:14.0.13] at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:98) ~[bundleFile:14.0.14] at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.apply(TransactionCommitFailedExceptionMapper.java:37) ~[bundleFile:14.0.13] at org.opendaylight.controller.cluster.databroker.ConcurrentDOMDataBroker.handleException(ConcurrentDOMDataBroker.java:189) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.ConcurrentDOMDataBroker$1.onFailure(ConcurrentDOMDataBroker.java:133) ~[bundleFile:?] at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1117) ~[bundleFile:?] at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1004) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:767) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:516) ~[bundleFile:?] at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:54) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractProxyTransaction.lambda$directCommit$4(AbstractProxyTransaction.java:510) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] 
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) ~[?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) ~[?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) ~[?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) ~[?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) ~[?:?] Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ModifyTransactionRequest{target=member-1-datastore-config-fe-1-chn-7-txn-0-1, sequence=2, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1837282775], modifications=0, protocol=SIMPLE} timed out after 120.014672769 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[bundleFile:?] ... 26 more
[2025-09-23T02:13:36,756 through 02:13:42,874: the WARN "Failed to resolve shard" from AbstractShardBackendResolver recurred once per second (02:13:36,756, 02:13:37,775, 02:13:38,795, 02:13:39,815, 02:13:40,835, 02:13:41,854, 02:13:42,874), each time with the identical TimeoutException: Connection attempt failed, caused by NotLeaderException for member-3-shard-inventory-config, and the same stack trace as shown above; duplicate traces omitted]
2025-09-23T02:13:43,895 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:44,915 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:45,934 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:46,955 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:47,974 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:48,995 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:50,014 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:51,034 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:52,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:53,075 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:54,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:55,115 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:13:56,135 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:57,155 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:58,175 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:13:59,196 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:00,214 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:01,235 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:02,254 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:03,274 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:04,294 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:05,313 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:06,334 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:07,354 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:08,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:14:09,395 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:10,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:11,435 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:12,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:13,474 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:14,496 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:15,515 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:16,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:17,555 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:18,575 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:19,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:20,627 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:21,644 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:14:22,665 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:23,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:14:24,704 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:25,724 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:26,744 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:27,764 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:28,784 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:29,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:30,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:31,845 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:32,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:33,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:34,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:35,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:36,945 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:37,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:38,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:14:40,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:41,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:42,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:43,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:44,085 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:45,105 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:46,124 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:47,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:48,167 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:14:49,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:50,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:51,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:52,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:53,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:54,293 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:55,315 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:56,335 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:57,354 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:58,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:14:59,395 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:00,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:01,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:15:02,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:03,475 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:04,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:05,514 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:06,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
[The same "Failed to resolve shard" warning, with an identical TimeoutException / NotLeaderException stack trace, recurs roughly once per second at 02:15:07,554, 02:15:08,575, 02:15:09,595, 02:15:10,615, 02:15:11,637, 02:15:12,667, 02:15:13,685, 02:15:14,705, 02:15:15,724, 02:15:16,744, 02:15:17,765, 02:15:18,784, 02:15:19,807 and 02:15:20,828.]
2025-09-23T02:15:21,844 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:22,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:23,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:24,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:25,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:26,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:27,965 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:15:28,985 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:30,005 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:31,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:32,045 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:33,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:34,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:15:35,104 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:15:36,124 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:37,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:38,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:39,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:40,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:41,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:42,245 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:43,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:44,286 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:45,305 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:46,325 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:47,345 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:48,378 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:49,394 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:15:50,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:51,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:52,455 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:53,475 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:54,495 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:15:55,514 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:56,536 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:57,555 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:58,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:15:59,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:00,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:01,634 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:02,653 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:03,674 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:04,694 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:05,714 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:06,734 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:07,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:08,775 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:09,794 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:10,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:11,834 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:12,854 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:13,905 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:14,929 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:15,943 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:16,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:16:17,985 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:19,005 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:20,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:21,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:16:22,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:23,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:24,105 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:25,125 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:26,145 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:27,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:28,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:29,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:30,225 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:31,245 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:32,265 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:33,285 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:34,304 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:16:35,323 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:36,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:37,364 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:38,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:39,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:40,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:41,445 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:42,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:43,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:44,505 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:16:45,524 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 
2025-09-23T02:17:00,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:17:01,844 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:02,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:03,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:04,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:05,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:06,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:07,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:08,985 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:10,005 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:11,025 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:12,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:13,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:14,085 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:17:15,104 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:16,124 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:17,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:18,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:19,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:20,205 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:21,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:22,245 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:23,265 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:24,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:25,304 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:26,324 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:27,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
2025-09-23T02:17:28,365 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:29,385 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:30,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:31,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:32,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:33,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:34,485 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:35,507 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:36,525 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:37,544 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:38,565 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:39,585 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:40,604 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:41,625 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:17:42,645 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:43,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:44,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:45,704 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:46,725 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:47,746 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:48,765 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:49,784 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:50,805 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:51,827 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:52,846 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:53,865 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:17:54,885 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:55,905 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:56,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:57,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:58,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:17:59,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:01,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:02,025 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:03,045 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:04,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:05,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:06,104 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:07,126 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:18:08,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:09,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:10,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:11,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:12,225 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:13,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:14,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:15,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:16,304 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:17,323 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:17,331 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Continuous Deletion and Addition Of Flows for 5 iterations" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Continuous Deletion and Addition Of Flows for 5 iterations 2025-09-23T02:18:17,962 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before and After Continuous Deletion and Addition Of Flows for 5 iterations" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before and After Continuous Deletion and Addition Of Flows for 5 iterations 2025-09-23T02:18:18,362 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:18:18,484 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete All Flows From Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete All Flows From Follower Node 2025-09-23T02:18:18,781 | INFO | qtp1298175459-509 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Using Ping Pong Flow Tester Impl 2025-09-23T02:18:18,782 | INFO | qtp1298175459-509 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Using Transaction Chain Flow Writer Impl 2025-09-23T02:18:18,783 | INFO | ForkJoinPool-10-worker-3 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Number of Txn for dpId: openflow:1 is: 1 2025-09-23T02:18:18,783 | INFO | ForkJoinPool-10-worker-3 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Creating new txChain: org.opendaylight.mdsal.binding.dom.adapter.BindingDOMTransactionChainAdapter@49863130 for dpid: openflow:1 2025-09-23T02:18:18,820 | INFO | ForkJoinPool-10-worker-3 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Completed FlowHandlerTask thread for dpid: openflow:1 2025-09-23T02:18:19,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:20,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:21,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:22,445 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:18:23,465 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:18:24,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:25,504 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:26,524 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:27,545 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:28,563 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:29,586 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:30,603 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:31,624 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:32,644 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:33,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:34,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:35,704 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:36,725 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:37,743 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:18:38,764 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:39,785 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:40,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:41,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:42,845 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:18:43,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:44,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:45,905 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:46,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:47,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:48,963 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:49,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:51,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:18:52,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
[15 identical WARN entries from AbstractShardBackendResolver ("Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed", caused by NotLeaderException for member-3-shard-inventory-config, with the same stack trace as above) repeated at roughly one-second intervals from 2025-09-23T02:18:52,024 through 2025-09-23T02:19:06,305]
2025-09-23T02:19:07,324 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:08,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:09,365 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:19:10,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:11,405 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:12,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:13,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:14,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:15,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:16,504 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:17,525 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:18,545 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:19,565 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:20,586 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:21,605 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:22,642 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:19:23,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:24,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:25,705 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:26,724 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:27,744 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:28,764 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:29,785 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:30,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:31,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:32,846 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:33,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:34,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:35,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:19:36,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:37,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:38,965 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:39,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:41,005 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:42,025 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:43,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:44,065 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:45,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:46,104 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:19:47,125 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:19:48,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:49,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:50,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:51,205 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:52,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:53,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:54,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:55,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:56,304 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:57,325 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:58,345 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:19:59,365 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:20:00,385 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:20:01,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:20:02,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
5 more 2025-09-23T02:20:03,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:04,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:05,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:06,504 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:07,524 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:08,544 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:09,564 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:10,585 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:11,606 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:12,624 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:13,643 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:14,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:15,683 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:20:16,705 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:17,724 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:18,744 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:18,832 | ERROR | CommitFutures-10 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Transaction chain: org.opendaylight.mdsal.binding.dom.adapter.BindingDOMTransactionChainAdapter@49863130 FAILED due to: org.opendaylight.mdsal.common.api.TransactionCommitFailedException: canCommit encountered an unexpected failure at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.newWithCause(TransactionCommitFailedExceptionMapper.java:42) ~[bundleFile:14.0.13] at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.newWithCause(TransactionCommitFailedExceptionMapper.java:18) ~[bundleFile:14.0.13] at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:98) ~[bundleFile:14.0.14] at org.opendaylight.mdsal.dom.spi.TransactionCommitFailedExceptionMapper.apply(TransactionCommitFailedExceptionMapper.java:37) ~[bundleFile:14.0.13] at org.opendaylight.controller.cluster.databroker.ConcurrentDOMDataBroker.handleException(ConcurrentDOMDataBroker.java:189) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.ConcurrentDOMDataBroker$1.onFailure(ConcurrentDOMDataBroker.java:133) ~[bundleFile:?] 
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1117) ~[bundleFile:?] at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1004) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:767) ~[bundleFile:?] at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:516) ~[bundleFile:?] at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:54) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractProxyTransaction.lambda$directCommit$4(AbstractProxyTransaction.java:510) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ConnectionEntry.complete(ConnectionEntry.java:47) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:434) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.ClientActorBehavior.onReceiveCommand(ClientActorBehavior.java:47) ~[bundleFile:?] at org.opendaylight.controller.cluster.access.client.AbstractClientActor.onReceiveCommand(AbstractClientActor.java:86) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:269) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) ~[?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) ~[?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) ~[?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) ~[?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) ~[?:?] 
Caused by: org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ModifyTransactionRequest{target=member-1-datastore-config-fe-1-chn-8-txn-0-1, sequence=11, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1837282775], modifications=0, protocol=SIMPLE} timed out after 120.010777931 seconds. The backend for inventory is not available. at org.opendaylight.controller.cluster.access.client.AbstractClientConnection.lambda$timeoutEntry$0(AbstractClientConnection.java:435) ~[bundleFile:?] ... 26 more 2025-09-23T02:20:18,833 | INFO | CommitFutures-11 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Completed all flows installation for: dpid: openflow:1 in 5329206059084ns 2025-09-23T02:20:18,834 | ERROR | CommitFutures-11 | FlowWriterTxChain | 297 - org.opendaylight.openflowplugin.applications.bulk-o-matic - 0.20.0 | Error: TransactionCommitFailedException{message=canCommit encountered an unexpected failure, errorList=[RpcError [message=canCommit encountered an unexpected failure, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.controller.cluster.access.client.RequestTimeoutException: ModifyTransactionRequest{target=member-1-datastore-config-fe-1-chn-8-txn-0-1, sequence=11, replyTo=Actor[pekko://opendaylight-cluster-data/user/$a#-1837282775], modifications=0, protocol=SIMPLE} timed out after 120.010777931 seconds. The backend for inventory is not available.]]} in Datastore write operation: dpid: openflow:1, begin tableId: 0, end tableId: 1, sourceIp: 10001 2025-09-23T02:20:19,765 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:20,785 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] 
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:21,805 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:22,825 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:23,845 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:24,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:25,885 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:26,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:27,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
5 more 2025-09-23T02:20:44,274 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:45,295 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:46,316 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:47,334 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:48,353 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:49,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:50,394 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:51,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:52,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:53,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:54,474 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:55,495 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:56,515 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:20:57,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:58,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:20:59,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:00,606 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:01,624 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:02,645 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:03,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:04,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:05,704 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:06,724 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:07,743 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:08,765 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:09,783 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:21:10,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:21:11,823 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:12,844 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:13,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:14,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:15,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:16,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:17,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:18,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:19,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:21,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:22,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:23,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:21:24,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:25,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:26,103 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:27,123 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:28,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:29,163 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:30,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:31,203 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:32,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:33,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:34,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:35,285 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:36,305 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:21:37,324 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:38,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:39,363 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:40,383 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:41,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:42,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:43,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:44,465 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:45,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:46,505 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:47,524 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:48,544 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:49,564 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:21:50,585 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:51,604 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:21:52,624 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
2025-09-23T02:21:53,645 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:21:54,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:21:55,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:21:56,704 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:21:57,725 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:21:58,744 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:21:59,764 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:00,784 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:01,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:02,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:03,845 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:04,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:05,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:06,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:07,925 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:08,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:09,966 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:10,985 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:12,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:13,025 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:14,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:15,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:16,085 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:22:17,105 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:18,126 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:19,145 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:20,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:21,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:22,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:23,225 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:24,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:25,265 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:26,285 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:27,304 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:28,324 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:29,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:22:30,365 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:31,385 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:32,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:33,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:22:34,445 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-09-23T02:22:35,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:36,486 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:37,504 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:38,525 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:39,545 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:40,564 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:41,584 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:42,603 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:43,624 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:44,644 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:45,665 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:46,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:47,704 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:48,724 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:49,744 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:22:50,764 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
The same warning, with an identical TimeoutException / NotLeaderException stack trace, recurred roughly once per second:
2025-09-23T02:22:51,784 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:52,840 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:53,855 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:54,873 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:55,894 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:56,914 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:57,934 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:58,954 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:22:59,973 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:23:00,994 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:23:02,013 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:23:03,034 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:23:04,055 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:23:05,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:06,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:07,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:08,134 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:09,154 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:23:10,173 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:11,194 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:12,215 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:13,235 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:14,254 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:15,273 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:16,295 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:17,315 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:18,335 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:19,354 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:20,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:21,394 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:22,413 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more
2025-09-23T02:23:23,435 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:24,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:25,474 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:26,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:27,514 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:28,535 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:29,555 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:30,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:31,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
[The identical WARN entry — AbstractShardBackendResolver "Failed to resolve shard", java.util.concurrent.TimeoutException: Connection attempt failed, caused by the same NotLeaderException for member-3-shard-inventory-config — recurs at roughly one-second intervals from 2025-09-23T02:23:32,614 through 2025-09-23T02:23:46,894 (15 further occurrences) as the datastore client keeps retrying while that shard has no current leader; the repeated stack traces, identical to the one above, are elided. The final occurrence follows.]
2025-09-23T02:23:47,914 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:48,933 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:23:49,954 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:50,975 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:51,994 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:53,014 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:54,034 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:55,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:56,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:57,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:58,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:23:59,135 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:00,154 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:01,174 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:02,194 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:24:03,214 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:04,234 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:05,254 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:06,274 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:07,294 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:08,314 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:09,335 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:10,354 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:11,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:12,395 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:24:13,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:24:14,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:15,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:16,474 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:17,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:18,516 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:19,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:20,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:21,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:22,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:23,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:24,634 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:25,654 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:26,674 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:27,695 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:28,713 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
5 more 2025-09-23T02:24:29,734 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:30,755 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:31,774 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:32,794 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:33,815 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:34,834 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:35,854 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:36,874 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:37,894 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:38,914 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:39,934 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:40,954 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:41,973 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:24:42,994 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:44,015 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:45,034 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:46,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:47,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:48,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:49,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:50,134 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:51,154 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:52,174 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:53,194 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:54,214 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:24:55,234 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:24:56,254 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:24:57,274 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:58,294 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:59,314 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:24:59,504 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Verify No Flows In Cluster After Flow Deletion" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Verify No Flows In Cluster After Flow Deletion
2025-09-23T02:25:00,337 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:01,355 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:02,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:03,394 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:04,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:05,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:06,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:07,475 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:08,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:09,515 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:10,535 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:11,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-09-23T02:25:12,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:13,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:14,613 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:15,635 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:16,654 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:17,674 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:18,694 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:19,714 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:20,734 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:25:21,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:22,774 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:23,794 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:24,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:40,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:41,135 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:42,154 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:43,175 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:44,194 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:45,214 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:46,234 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:47,254 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:25:48,274 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:49,294 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:50,315 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:51,334 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:52,354 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:53,375 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:54,395 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:55,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:56,434 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:57,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:58,474 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:25:59,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:00,514 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:26:01,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:02,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:03,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:04,593 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:05,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:26:06,634 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:07,654 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:08,674 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:09,695 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:10,715 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:11,734 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:12,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:13,773 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:14,795 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:15,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:16,835 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:17,854 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:18,874 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:19,893 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:20,914 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:21,933 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:22,953 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:23,974 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:24,994 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:26,014 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:27,034 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:26:28,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:29,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:30,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:31,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:32,134 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:33,155 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:34,175 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:26:35,194 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:36,215 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:37,235 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:38,254 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:39,275 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:40,295 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:41,314 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:42,334 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:43,354 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:44,374 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:45,394 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:46,414 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:47,433 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:48,454 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:49,474 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:26:50,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:51,515 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:52,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:53,555 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:26:54,575 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:55,595 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:56,615 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:57,635 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:58,655 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:26:59,676 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:00,694 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:01,714 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:02,736 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:27:03,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:04,774 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:05,794 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:06,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:07,834 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:08,855 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:09,877 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:10,895 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:11,914 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:12,934 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:13,955 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:14,975 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:15,994 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:17,014 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:18,033 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:19,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:20,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:27:21,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:22,129 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:23,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:24,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:25,185 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:26,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:27,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:28,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:29,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:30,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:31,311 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:32,324 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:33,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:27:34,364 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:35,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:36,405 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:37,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:38,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:39,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:40,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:41,504 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:42,524 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:43,544 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:27:44,566 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:27:45,584 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:46,604 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:47,625 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:48,644 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:49,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:50,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:51,705 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:52,724 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:53,745 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:54,765 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:55,785 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:56,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:57,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:58,844 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:27:59,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	... 5 more
2025-09-23T02:28:00,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:01,904 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:02,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:03,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:04,921 | INFO | sshd-SshServer[e245f73](port=8101)-timer-thread-1 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Disconnecting(ServerSessionImpl[karaf@/10.30.170.213:43340]): SSH2_DISCONNECT_PROTOCOL_ERROR - Detected IdleTimeout after 1800831/1800000 ms.
2025-09-23T02:28:04,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:05,985 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:07,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:08,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:09,043 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:10,063 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:11,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:12,105 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:13,125 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:14,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:28:15,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:16,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:17,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:28:18,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:19,245 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:20,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:21,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:22,305 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:23,324 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:24,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:25,364 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:26,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:27,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:28,424 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:29,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:30,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:28:31,486 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:32,503 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:33,524 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:34,545 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:35,564 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:36,583 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:37,604 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:38,624 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:39,644 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:40,664 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:41,684 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:42,703 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:43,725 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:28:44,744 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:45,764 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:46,784 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:47,804 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:48,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:49,844 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:50,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:51,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:52,903 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:53,923 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:28:54,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
[The same "Failed to resolve shard" warning repeats at roughly one-second intervals with a stack trace identical to the 02:28:54,944 entry above; the duplicate traces are omitted and only the log headers are kept:]
2025-09-23T02:28:55,964 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:28:56,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:28:58,005 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:28:59,024 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:00,044 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:01,063 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:02,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:03,104 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:04,123 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:05,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:06,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:07,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:08,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:09,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:10,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-09-23T02:29:11,264 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:12,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:13,304 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:14,325 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:15,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:16,364 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:17,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:18,403 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:19,425 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:20,445 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:21,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:22,484 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:23,504 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
[The same "Failed to resolve shard" warning, with an identical TimeoutException / NotLeaderException stack trace for member-3-shard-inventory-config, recurs roughly once per second from 02:29:24,524 through 02:29:38,804 (15 occurrences); the duplicate traces are omitted.]
2025-09-23T02:29:39,823 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:40,843 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:41,864 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:42,884 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:43,906 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:44,924 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:45,944 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:46,965 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:47,984 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:49,004 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:50,025 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:29:51,045 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:52,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:53,085 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:54,105 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:55,124 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:56,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:57,164 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:58,184 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:29:59,204 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:00,224 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:01,244 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:02,263 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:03,284 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:30:04,306 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:05,323 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-09-23T02:30:06,344 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:07,364 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:08,384 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:09,404 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:10,426 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:11,444 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:12,464 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:13,499 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:14,535 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:15,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:16,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:17,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:18,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:19,634 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:20,654 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
2025-09-23T02:30:21,674 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:22,694 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:23,713 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:24,734 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:25,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:26,774 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:27,795 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:28,814 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:29,834 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:30:30,854 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:31,874 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:32,894 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:33,913 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:34,934 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:35,954 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:36,974 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:37,994 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:39,014 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:40,034 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:41,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:42,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:43,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:30:44,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:45,135 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:46,154 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:47,174 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:30:48,195 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:03,494 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:04,514 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:05,534 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:06,554 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:07,574 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:08,594 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:09,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:31:10,635 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:11,654 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:12,675 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:13,695 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:14,714 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:15,733 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:31:16,755 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
[The preceding WARN "Failed to resolve shard" entry and its TimeoutException/NotLeaderException stack trace repeat verbatim 14 more times, at roughly one-second intervals, from 2025-09-23T02:31:17,774 through 2025-09-23T02:31:31,035.]
2025-09-23T02:31:32,054 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:33,074 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:34,094 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:35,114 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:36,134 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:31:37,154 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:38,174 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:39,194 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:40,215 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more
2025-09-23T02:31:40,339 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.PreLeader Verification" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.PreLeader Verification
2025-09-23T02:31:41,234 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:42,255 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:43,274 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:31:44,295 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
[... the same "Failed to resolve shard" WARN entry (java.util.concurrent.TimeoutException: Connection attempt failed, caused by NotLeaderException for member-3-shard-inventory-config) repeats unchanged at roughly one-second intervals, 15 more times, from 2025-09-23T02:31:45,314 through 2025-09-23T02:31:59,594; only the timestamps differ ...]
2025-09-23T02:32:00,614 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:01,634 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-09-23T02:32:02,654 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:03,674 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:04,695 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:05,714 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:06,734 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:07,754 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:08,777 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:09,824 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:10,845 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:12,040 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-09-23T02:32:13,064 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:32:14,084 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:32:15,104 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:32:16,124 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-09-23T02:32:17,144 | WARN | ForkJoinPool.commonPool-worker-7 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config#-223950070] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more